| Field | Type | Stats |
|---|---|---|
| id | stringlengths | 10 – 10 |
| title | stringlengths | 3 – 179 |
| track | stringclasses | 1 value |
| status | stringclasses | 3 values |
| keywords | stringlengths | 2 – 2.39k |
| primary_area | stringclasses | 21 values |
| author | stringclasses | 501 values |
| authorids | stringclasses | 501 values |
| aff | stringclasses | 1 value |
| aff_domain | stringclasses | 1 value |
| position | stringclasses | 1 value |
| rating | stringclasses | 355 values |
| confidence | stringlengths | 0 – 19 |
| soundness | stringclasses | 642 values |
| contribution | stringclasses | 596 values |
| presentation | stringclasses | 782 values |
| rating_avg | float64 | 0 – 9 |
| confidence_avg | float64 | 0 – 5 |
| soundness_avg | float64 | 0 – 4 |
| contribution_avg | float64 | 0 – 4 |
| presentation_avg | float64 | 0 – 4 |
| corr_rating_confidence | float64 | -1 – 1 |
| project | stringclasses | 1 value |
| github | stringclasses | 1 value |
| Review | listlengths | 2 – 10 |
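The schema above follows the layout of a Hugging Face `datasets` viewer, so the rows can presumably be loaded and inspected as in the sketch below. The dataset identifier `<org>/<dataset-name>` and the `train` split are placeholders I introduce here, not values confirmed by the records.

```python
from datasets import load_dataset  # assumes the data is distributed as a Hugging Face dataset

# Placeholder identifier: substitute the real <org>/<dataset-name> path.
ds = load_dataset("<org>/<dataset-name>", split="train")

print(ds.column_names)       # id, title, track, status, keywords, primary_area, ...
row = ds[0]
print(row["id"], row["title"])
print(row["rating"])         # per-review scores as a ";"-joined string, e.g. "3;3;5;5"
print(len(row["Review"]))    # each row carries a list of 2-10 review/metadata dicts
```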
id: 37mG1vvEKf
title: ChuLo: Chunk-Level Key Information Representation for Efficient Long Document Processing
track: main
status: Active
keywords: Long Document Processing;Long Document Classification;Long Document Tagging
primary_area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
rating: 3;3;5;5
confidence: 4;3;4;4
soundness: 2;2;3;3
contribution: 1;2;3;2
presentation: 3;2;2;2
rating_avg: 4
confidence_avg: 3.75
soundness_avg: 2.5
contribution_avg: 2
presentation_avg: 2.25
corr_rating_confidence: 0.57735
Review:
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Some questions regarding the evaluation process:\n(1) The results in Table 3's \"All\" setting do not match those in Table 1. Can you explain the reason behind this gap?\n(2) In Table 3, why not compare ChuLo with other baselines used in Table 1?\n(3) Why do GPT4o and Gemini1.5pro only have results on the \"2048\" setting?\n(4) The NER task prompt used in Figure 8 might not be optimal. Please refer to some related research in this area, such as [1].\n2. Although Algorithm 1 provides some details about the keyphrase extraction process, it would be better if more explanations could be added. For example, the meaning of the regex used (for extracting noun phrases), and the effect for the position penalty. Certain notations are unexplained, like $h$ in line 8.\n3. The proposed method has a lot of hyperparameters: $a, b, n, \\alpha, \\gamma$, to name a few. How did you decide the value for them, and what are the values you used?\n4. Do you have any explanations for why RoBERTa underperforms BERT in Table 8?\n5. Why only emphasize the noun phrases instead of emphasizing key sentences that contain facts about the key phrases?\n6. Some minor mistakes: In Algorithm 1's Line 8, $l_k$ should be $l_{k_i}$. In line 216, add a space within \"key phrases\".\n\n[1] Dhananjay Ashok and Zachary Lipton. PromptNER: Prompting For Named Entity Recognition. arXiv preprint arXiv: 2305.15444." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The proposed method introduces a novel combination of unsupervised keyphrase extraction and chunk-based representation, which benefits encoder models for text classification.\n2. The paper presents a thorough empirical evaluation across several datasets, demonstrating clear performance improvements compared to traditional baselines and SoTA API-based models.\n3. The performance analysis across different document lengths provides useful information for similar research in the future." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces ChuLo, a chunk-level key information representation method aimed at enhancing the efficiency and effectiveness of Transformer-based models for long document processing. ChuLo employs unsupervised keyphrase extraction to create semantically meaningful chunks, prioritizing important tokens while reducing input length. The authors argue that this approach better preserves semantic and contextual information compared to existing techniques such as truncation or sparse self-attention. The method is validated on multiple document classification and token classification tasks, showing competitive performance improvements over baselines." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. From the writing perspective, the structure of certain sections is repetitive and confusing. For example, in Section 3.2, the idea that extracting keyphrases is important is repeated multiple times throughout the paragraph. The same idea is repeated in Section 3.4 as well.\n2. The proposed keyphrase extraction method has some strong inductive bias without explanation, like the position penalty, which is neither explained nor verified through ablation studies. I suppose this design assumes that the noun phrases appear earlier in the text are more likely key phrases. The effect of such a design is not discussed and might limit the use case for the proposed method.\n3. There are some doubts regarding the evaluation process. More details in the Questions part." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See weakness" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The method of dividing long documents into manageable chunks is reasonable.\n2. The performance is good particularly in token-level classification." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces Chulo, a model that enhances transformer-based approaches for long document-level and token-level classification tasks by effectively integrating chunk-level information. The method of dividing long documents into manageable chunks is reasonable, resulting in good performance especially in token-level classification tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The motivation of this work—\"… can handle long documents efficiently while retaining all key information from the input…\" (lines 55-56)—appears unaddressed. As I understand, the proposed model maintains the same sequence length as other BERT-like models and integrates additional information, such as chunk-level details with key phrases, which in fact increases computational load. The paper would benefit from a dedicated section thoroughly discussing the motivation of the work, or detailing the method’s potential cost savings (e.g., in terms of FLOPs, model size, etc.).\n\n2. The comparison with LLMs appears unfair, as Chulo is fine-tuned on the downstream dataset. To make the comparison more balanced, it would be beneficial to fine-tune some open-source LLMs, such as LLaMA or Qwen, on the same dataset.\n\n3. The design is not novel; similar to hierarchical-BERT [1], it organizes sentences into chunks.\n\n[1] Lu, J., Henchion, M., Bacher, I. and Namee, B.M., 2021. A sentence-level hierarchical bert model for document classification with limited labelled data. 
In Discovery Science: 24th International Conference, DS 2021, Halifax, NS, Canada, October 11–13, 2021, Proceedings 24 (pp. 231-241). Springer International Publishing." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. I would not recommend using the \"long document\" concept here, as many LLMs like LLaMA have already extended the context length to 131k, whereas this paper handles only up to 10k.\n2. The comparison with GPT-4o is commendable; however, how would LLaMA perform on this task if fine-tuned directly? I don't believe this experiment can be avoided.\n3. If comparing with fine-tuned LLMs or GPT models, I would expect the authors to include inference speed comparisons, which might be one of the method's advantages." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. ChuLo outperforms GPT-4o on certain tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces \"ChuLo,\" a chunk-level key information representation method designed to improve long document processing in Transformer-based models. Traditional models face limitations handling extensive texts due to high computational demands, often resulting in information loss from truncation, sparse attention, or simple chunking methods. ChuLo uses unsupervised keyphrase extraction to identify and emphasize core content within each chunk, enhancing document and token classification accuracy without losing critical details. Experimental results demonstrate ChuLo's superior performance across various datasets, especially for lengthy documents, making it a scalable solution for tasks requiring comprehensive text analysis." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The novelty of the method is limited, as keyphrase extraction is already widely used.\n2. The title and experiments do not align, as \"Long Document Processing\" has a broader scope than classification.\n3. The baselines used for comparison are somewhat outdated, with most being from 2022 or earlier." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. 
According to line 72, how does the paper determine the performance of the ChuLo method on other types of NLP tasks?\n2. What are the specific details of the model training process described in the paper?\n3. What are the scientific explanations for the significant performance differences demonstrated by ChuLo in Sections 5.4 and 5.5?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Originality: The combination of unsupervised key phrase extraction with chunk representation to improve long document understanding is uncommon in previous research.\n\nQuality: The paper aims to address the practical and significant issue of computational limitations faced by Transformer models when processing long documents. Experimental evaluations conducted on multiple document-level and token-level classification tasks demonstrate the feasibility of the proposed method.\n\nClarity: The paper is structured clearly, with a logical progression from problem statement to methodology, experiments, and conclusions. The detailed description of the SKP algorithm in the paper aids readers in understanding its working principles.\n\nImportance: The proposed ChuLo method enhances the efficiency and performance of long document processing, holding potential for application in long document classification tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a method named ChuLo, designed to address the computational limitations encountered by Transformer-based models when processing long documents. ChuLo extracts key phrases through an improved PromptRank to preserve the core content of the document while reducing the input length. The model is trained using enhanced chunk representations of key information, enabling it to effectively integrate the core semantic content of the document. The paper supports its claims through multiple document-level and token-level classification tasks, providing both qualitative and quantitative analyses. Experimental results demonstrate that ChuLo achieves competitive results across multiple datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The ChuLo method proposed in the paper focuses on long document processing, particularly in document classification tasks. It is stated on line 72 that the contributions of this method include its applicability to various NLP applications, but you have not experimentally confirmed the generalization ability of your method. Therefore, we cannot determine its performance on other types of NLP tasks, such as long document question answering and summarization.\n2. The description of the model training process in the paper is not detailed enough, lacking specific steps of the training. In Section 3.4 of the paper, only the selected model for training is introduced, with no mention of data sources, data processing, optimization algorithms, parameter configurations, or other relevant details.\n3. In Sections 5.4 and 5.5, ChuLo demonstrates significant performance differences compared to existing methods. Therefore, providing scientific explanations for these differences is very important. The lack of analysis of such significant differences in the paper is confusing." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024chulo,\ntitle={ChuLo: Chunk-Level Key Information Representation for Efficient Long Document Processing},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=37mG1vvEKf},\nnote={under review}\n}" }, "abstract": { "value": "Transformer-based models have achieved remarkable success in various Natural Language Processing (NLP) tasks, yet their ability to handle long documents is constrained by computational limitations. Traditional approaches, such as truncating inputs, sparse self-attention, and chunking, attempt to mitigate these issues, but they often lead to information loss and hinder the model's ability to capture long-range dependencies. In this paper, we introduce ChuLo, a novel chunk representation method for long document classification that addresses these limitations. Our ChuLo groups input tokens using unsupervised keyphrase extraction, emphasizing semantically important keyphrase based chunk to retain core document content while reducing input length. This approach minimizes information loss and improves the efficiency of Transformer-based models. Preserving all tokens in long document understanding, especially token classification tasks, is especially important to ensure that fine-grained annotations, which depend on the entire sequence context, are not lost. We evaluate our method on multiple long document classification tasks and long document token classification tasks, demonstrating its effectiveness through comprehensive qualitative and quantitative analyses." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Long Document Processing", "Long Document Classification", "Long Document Tagging" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/5198a2aecd15fa75c07f9e00c2d7651ca77a9fb1.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/c2575b7c7023541aebc19568e132ba6f90fe94eb.zip" }, "title": { "value": "ChuLo: Chunk-Level Key Information Representation for Efficient Long Document Processing" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
id: 381rZinzJE
title: Physics-Informed Autoencoder for Enhancing Data Quality to Improve the Forecasting Reliability of Carbon Dioxide Emissions from Agricultural Fields
track: main
status: Active
keywords: physics-informed machine learning;autoencoders;gap-fillling;net ecosystem exchange;noise;stochastic differential equation
primary_area: applications to physical sciences (physics, chemistry, biology, etc.)
rating: 3;3;5;6
confidence: 5;2;2;3
soundness: 3;2;3;3
contribution: 1;2;3;3
presentation: 2;1;1;3
rating_avg: 4.25
confidence_avg: 3
soundness_avg: 2.75
contribution_avg: 2.25
presentation_avg: 1.75
corr_rating_confidence: -0.31427
Review:
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": { "value": "Thank you for sharing your comprehensive review on the manuscript. We are working on addressing each comment and improving the manuscript accordingly. We will share the updates with you soon.\n\nMeanwhile, we have a question on one of the comments:\n\n_“In section 4.4, it is mentioned that the integration of the SDE in the training of the autoencoder follows previous work [Raissi 2017], but it is not sufficiently described to make the paper self-contained.”_\n\nThe SDE is directly made part of the neural architecture, with the mathematical operators in the SDE are direct nodes in the computation graph. We can view this as a non-trainable physics layer in the neural network architecture. Should we reflect this better in Figure 1?" }, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": { "value": "Initial Response" }, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": { "value": "Thank you for sharing your comprehensive review on the manuscript. We are working on addressing each comment and improving the manuscript accordingly. We will share the updates with you soon.\n\nMeanwhile, we have a question on one of the comments:\n_“The SDE formulation in Section 3.2 assumes specific forms for the drift and diffusion terms. The justification for these choices comes from prior work, but the implications of these modelling choices should be discussed. What happens when these assumptions are violated?”_\n\nOur formalization of NEE in terms of an SDE is based on the two papers referred to in line 165 (White & Luo 2008; Weng (2011)), with a drift term and a noise term. According to both of them, the noise term can be defined as a Gaussian process. The drift equations in Equation 5 are derived from the model defined in Equations 1,2 and 3. The parameters inside the drift term are predicted by the decoders in the main PIAE model, described in section 4 later. \nIs your question specifically around the parameters σnight and σday in the noise term and the drift? Do you mean we need to do more precise study on each term? Are you referring to the state-of-the-art assumptions?" 
}, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": { "value": "Initial Response" }, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. The SDE formulation in Section 3.2 assumes specific forms for the drift and diffusion terms. The justification for these choices comes from prior work, but the implications of these modeling choices should be discussed. What happens when these assumptions are violated?\n2. The two-phase training procedure using MSE then MMD requires more theoretical grounding:\n 1. Why this specific sequence? How is convergence of the first phase determined before switching to MMD?\n 2. Were other training strategies considered?\n3. The choice of MMD kernels isn't discussed - how sensitive is the method to this choice?\n4. How sensitive is the model to SDE parameter initialization?\n5. What's the computational overhead versus RF/XGBoost?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Introducing a stochastic differential equation for NEE measurements combining daytime\nand nighttime models with Gaussian noise.\n2. Demonstrating that PIAE improves gap-filling robustness compared to state-of-the-art\nmethods, handling gaps from months to years.\n3. Better Maximum Mean Discrepancy (MMD), Wasserstein distance, and Kullback-Leibler (KL) divergence validated significant improvements in NEE distribution learning.\n4. Achieving better fit to NEE measurements validated by lower MAE and higher R2 scores.\n5. Accurately predicting SDE parameters, enhancing interpretability.\n6. Consistent improvement in nighttime predictions across metrics\n7. Strong performance on distribution-based measures (MMD, Wasserstein, KL)\n8. Ability to capture unusual events (e.g., downward NEE spikes)\n9. Effective parameter estimation for both day and night models" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes Physics-Informed Autoencoders (PIAEs) to address gaps in CO2 emission measurements from agricultural fields. 
The method combines autoencoder architectures with physical Net Ecosystem Exchange (NEE) models, integrating equations that describe CO2 exchanges between the atmosphere and carbon pools (i.e., utilizing\nthe SDE defined as a Wiener process). Their main contribution is extending standard autoencoders with a stochastic differential equation framework that models NEE changes over time, particularly addressing nighttime measurement gaps. Their method also provides forecasting capabilities and enhances performance on NEE gap-filling by accurately learning the NEE distribution and associated parameters. They evaluate their approach on 8 years of flux tower data from East Anglia, showing improvements over current state-of-the-art methods, especially for nighttime predictions, where they achieve a 22% higher R2 score than Random Forest approaches." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While the supplementary material adequately explains the SDE derivation and diffusion coefficient determination, key points should be summarized in the main text. A brief note about how σnight and σday are derived from empirical error distributions would help readers understand the transition from Eq. 5 to 6 without requiring supplementary material consultation\n2. AE is better than PIAE for all model parameter estimation across all metrics, contrary to their claim that their method enhances performance on NEE gap-filling by accurately learning the NEE distribution and associated parameters.\n3. The computational requirements compared to simpler approaches like RF are not discussed. \n4. The two-phase training procedure (MSE then MMD) has no convergence guarantees.\n5. The claimed 22% improvement in R2 score lacks context - no variance was reported (Error bars or confidence intervals for the reported metrics would help). The hyperparameter selection process for PIAE and baseline models (including random forest) is not described. A fair comparison requires careful tuning of all methods.\n6. Missing critical details:\n 1. How were hyperparameters selected for PIAE and baselines?\n 2. What are the network architectures (layer sizes, activation functions)?\n 3. Where are the error bars and statistical significance tests?\n 4. How does computational cost compare to simpler methods\n7. The implementation details are insufficient for the reproduction\n8. The comparisons in Figures 2 and 3 show selective periods without justification for their choice\n\nMinor comments:\n1. Section 4.5's description of the loss function uses inconsistent notation compared to earlier sections. \n2. There are some writing clarity issues, like in lines 50 and 98.\n3. The paper shows results across different timescales but doesn't systematically evaluate performance as a function of gap length. This would be valuable for understanding the method's practical utility.\n4. The NEE parameter estimation details might fit better in methods" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. Could the work be applied to other physical systems? Would that require knowledge of the DFE governing the system?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper applies ML techniques to enhance NEE measurements, which has the potential to improve the estimation of Co2 emissions, resulting in reduced uncertainty in our projections. **This is an important problem with high societal and environmental impact.**" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies the application of autoencoders for the problem of imputing missing values in Co2 Net Ecosystem Exchange (NEE) measurements. The autoencoder takes in several covariates, such as temperatures and radiations, at a given timestep $t$ and predicts the next-step NEE, along with several variables of a Stochastic Differential Equation that models changes in NEE." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **The presentation of the paper is convoluted, and requires a degree of familiarity with the NEE problem that is uncommon in the ICLR community**. It is then hard for me to judge the significance, originality and potential impact of the work.\n More precisely, these are some points that are not sufficiently explained or that make the paper hard to read and understand:\n- In the introduction, it is mentioned that missing NEE values are due to e.g. power shortages. I assume that in such scenarios, the values of the covariates (temperature, radiation, etc) are also missing due to the same issue. However, the proposed model requires having access to all covariate values at a given time. How can the model be applied in practice without these values?\n- In the introduction, the first highlighted contribution is the introduction of a SDE for NEE measurements. Put it that way, it sounds like the SDE is novel also in the physics. However this point is not stressed again later on, so I wonder whether the SDE is known and the novelty is in its use as supervision for learning ML models.\n- Line 157, it is mentioned that the $E_0$ parameter is estimated with the nighttime model and used in the daytime one, but it is not explained why.\n- In section 4.4, it is mentioned that the integration of the SDE in the training of the autoencoder follows previous work [Raissi 2017], but it is not sufficiently described to make the paper self-contained.\n- The related work is not sufficiently described. In particular, it is not clear whether the reported baselines RFR and XgBoost variant based on the work of [Moffat 2007] are also physics-informed or only statistical.\n- Second and third lines of Equation 5: do the second (from the left) commas separate two different definitions or do they indicate the continuation of the variable suffices?\n\n2. The tables do not report standard errors, which makes impossible to judge the significance of the improvements.\n\n3. The paper does not discuss limitations nor future work." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Why is latent heat (L) excluded? Having a high correlation to the target variable would seem to be a good thing when the goal is to predict missing values?\n\nThe notation in equation 4-5 is difficult to read. Would it not be more clear to write this in terms of partial derivatives?\n\nFor improved readability, consider to use italics for variables and roman (upright) type for named functions, as subscripts in equations, and for units of measurement. Consider that multi-letter abbreviations can be confusing in equations: For example, it can be unclear if rb is a single variable or the product of r and b.\n\nrb (night/day) is not defined in the main text as far as I can see. rb is mentioned in the text in the appendix but not in the mathematical derivations.\n\nIn equation 9, should is there not a difference between dt on the left and right hand side? On the left hand side, it seems to denote an infinitessimal element, and on the right side it is 30 minutes?\n\nI am not sure how this approach is an autoencoder. As I understand the written description, the model predicts one timestep ahead with a latent encoding, and thus does forecasting rather than reconstruction. However, Figure 1 does seem to imply that the decoders predict for the same timestep.\n\nIs there something wrong with the linebreaks in Algorithm 1, step 4?\n\nWhat is the reason for the choice of the two loss phases?\n\nI am not familiar with the literature on physics-informed autoencoders, but I would like to ask whether this paper introduces any technical contributions to the framework itself, or if the contribution is primarily the application of an existing modeling framework to significant applications." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The proposed method seems a good fit to the application." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper addresses the problem of forecasting CO2 emission from agricultural fields based on measurement data. In particular, the problem of predicting missing data is addressed. The authors present a set of stochastic differential equations that govern the net ecosystem exchange (NEE) that are used in a physics-informed autoencoder for data imputation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper was challenging to follow, primarily due to unclear notation and insufficient definitions of certain terms. Additionally, some design choices, such as the two-phase loss, are described but lack clear justification.\n\nThe main contribution appears to be a relatively straightforward application of an existing methodology to a specific domain. 
The novelty largely lies in application-specific details, which may not align closely with the primary interests of the ICLR community." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "The use of MMD seems a little bit odd. MMD is essentially a two-sample statistical test to identify of those samples are from the same probability distribution. We usually expect the two samples are from two independent realizations. But, based on the loss function, two samples are from the same realization, just one is the data and the other is a model prediction. If they are from the same realization ($\\omega^j$ in author's notation), minimizing the distance would make more sense, like the first phase of the training. In the end of day, for two samples from the same realization, minimizing MMD corresponds to minimizing MSE. But, all the hyperparameters (like the RBF kernel) makes it much less straightforward." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "It is shown that the proposed method outperformed some of the baseline methods that are used in the domain. It seems to suggest a potential of replacing the conventional machine learning models with the PINN models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this study, the authors utilized physics-informed neural network framework (PINN) to develop an auto-encoder for the forecasting of carbon dioxide emission. First, the overall physical process is modeled by using a set of ordinary differential equations and PINN is used to train a neural network. The authors proposed a two-stage training method, that in the first stage the neural network is trained by minimizing the mean absolute error and, then, in the second phase, the maximum mean discrepancy score is minimized. It is shown that the proposed method outperforms some of the naive baselines." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "First of all, the study is mainly focused on the application of the widely used PINN to a physical process for a specific domain. It does not look like there are novel algorithms or problem setups that can be of interest to a broader machine learning community. I would like to suggest the authors to submit this manuscript to a more domain specific venue.\n\nThe paper is not very well written. It is unclear how the SDE formulation is treated in the modeling, how the SDE and model are used for uncertainty quantification, how the evaluations were made by using what variables as inputs and predict how long in the future, and so on. I assume that this is due to the page limitation. 
It would have been better if the authors had put all the domain specific modeling sections in the appendix and focused more on the generic problem set up in the main body." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We introduce a Physics-Informed Autoencoder in order to fill the gaps and improve the training of the Net Ecosystem Exchange measurements at farm scale." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024physicsinformed,\ntitle={Physics-Informed Autoencoder for Enhancing Data Quality to Improve the Forecasting Reliability of Carbon Dioxide Emissions from Agricultural Fields},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=381rZinzJE},\nnote={under review}\n}" }, "abstract": { "value": "Missing values in measurements for carbon dioxide emissions on drained peatlands remains an open challenge for training forecasting techniques to achieve net zero. At the field scale, existing methods struggle to model $\\ce{CO_2}$ emissions to fill gaps, especially in nighttime measurements. We propose robust Physics-Informed Autoencoders (PIAEs), which combine the generative capabilities of Autoencoders with the reliability of physical models of Net Ecosystem Exchange (NEE) that quantify $\\ce{CO_2}$ exchanges between the atmosphere and major carbon pools. Our method integrates equations describing the physical processes and associated uncertainties to fill gaps in NEE measurements from eddy covariance (EC) flux towers. In the PIAE, various sensor measurements are encoded into the latent space, and a set of decoders is then used to approximate the ecosystem parameters and the optimal NEE forecast, directed by dynamics described by a stochastic differential equation. These decoders utilize nighttime and daytime NEE models that describe carbon transfer as a Wiener process. Finally, we use a two-phased training routine with two loss functions describing each phase: Mean Squared Error (MSE) and Maximum Mean Discrepancy (MMD) between the measurements and the reconstructed samples. PIAE outperforms the current state-of-the-art Random Forest Robust on the prediction of nighttime NEE measurements on various distribution-based and data-fitting metrics. We present significant improvement in capturing temporal trends in the NEE at daily, weekly, monthly and quarterly scales." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "physics-informed machine learning", "autoencoders", "gap-fillling", "net ecosystem exchange", "noise", "stochastic differential equation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/3bc3a2bd321ccef5a0204682c5aff4f38b0071e0.pdf" }, "presentation": null, "primary_area": { "value": "applications to physical sciences (physics, chemistry, biology, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/8007fd71add63cb21f09410c31255e4f7d82850c.pdf" }, "title": { "value": "Physics-Informed Autoencoder for Enhancing Data Quality to Improve the Forecasting Reliability of Carbon Dioxide Emissions from Agricultural Fields" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
id: 385gQZuuuR
title: Consistency Diffusion Models for Singel-Image 3D Reconstruction with priors
track: main
status: Withdraw
keywords: Bound;Variational Bayesian;3D Point Cloud;Single-Image;Reconstruction
primary_area: generative models
author: Chenru Jiang;Chengrui Zhang;Xi Yang;Jie Sun;Kaizhu Huang
authorids: ~Chenru_Jiang1;~Chengrui_Zhang1;~Xi_Yang1;~Jie_Sun6;~Kaizhu_Huang1
rating: 3;5;5;6
confidence: 5;3;3;4
soundness: 2;3;3;3
contribution: 2;2;2;3
presentation: 3;2;3;3
rating_avg: 4.75
confidence_avg: 3.75
soundness_avg: 2.75
contribution_avg: 2.25
presentation_avg: 2.75
corr_rating_confidence: -0.622543
Review:
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": { "value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors." } }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see the Weaknesses." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.This paper is easy to follow and well-written.\n2.The paper introduces the Consistency Diffusion Model, which incorporates 3D structural priors as constraints within a diffusion framework. The topic is interesting.\n3.The model employs a Bayesian framework, incorporating a new constraint term that introduces 3D structural priors into the variational process. This improves the model's consistency by raising the Evidence Lower Bound (ELBO), reducing uncertainty, and enhancing overall stability." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a novel Consistency Diffusion Model (CDM) designed to improve single-image 3D reconstruction. The proposed model leverages both 2D and 3D prior information to enhance the consistency of the reconstruction process, yielding promising results in single-view scenarios. Experimental evaluations demonstrate that CDM outperforms existing methods in certain cases, underscoring its potential." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.The inclusion of multiple priors, complex rotation matrices, and depth mapping computations increases the computational burden during training. 
There is a lack of detailed information on the training time and computational efficiency.\n2.Additional experiments incorporating different types of 3D structural priors and 2D image priors, as well as testing on a broader range of datasets, would help to validate the model’s generalizability and robustness across diverse conditions.\n3.The paper notes that inconsistent conditions between the training and sampling phases can lead to \"model drift,\" causing learning biases and unstable results. This could result in a performance gap between the training and deployment phases, affecting the model's real-world reliability. However, potential methods for mitigating or addressing the issue of model drift are not discussed." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Few questions that the reviewer has for the paper:\n1. How does the model perform on non-object centric scenes? (i.e, more extrapolating views)\n2. How are random camera parameters sampled? How did the authors make sure that there is no bias brought in on the sampling procedure of the camera parameters?\n\nComment: \nTypo in Figure 2 (a)." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Few strength of the papers that the reviewer appreciates are:\n1. The paper is fairly well written and easy to follow. \n2. The contributions are clearly isolated, from 2D and 3D, making it easy to justify the quality of each contributions in the ablation in the tables, e.g, Table 4\n3. Numerous technical ablations conducted to demonstrate how each components and their respective small deviations work." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a new way to generate 3D point clouds from a single image. The main contribution of the paper is to use 2D and 3D priors as a way to nudge diffusion models for robustness in the ill-posed problem domain of single view 3D point cloud estimation. \n2D priors are extracted from DINOv2, as depth and contour (as well as features). 3D priors are extracted as random camera transformations around an object of interest. \nThe results demonstrate that the method can outperform existing methods for single view point cloud estimation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Main weakness of the paper is that the contributions are not as well novel. Usage of 2D features and derivatives such as depth and contours are easily justifiable. 
However, because it is so clear and evident that use of depth and contour as a way to regularize 3D point cloud reconstruction will help, the reviewer does not find it as fundamental contribution to the community.\nIn other words, yes we know that the usage of depth and contours will help, and yes the paper has re-verified it. What leaves the takeaways? The reviewer is unsure if usage of these priors as a form of augmentations are worthy of contributions to the ICLR venue. \n\nIn addition, other contributions, such as usage of consistency in the diffusion process is not new; it may be new in the realm of point cloud diffusion, but it would be application of existing approaches." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1.\tCould the authors provide a theoretical basis for enforcing consistency terms across all diffusion steps, rather than focusing on key steps where 3D structure becomes clearer?\n\t2.\tHow does the addition of consistency terms impact the model’s ability to generate diverse outputs? Is there a measurable loss in generative variability?\n\t3.\tCould the authors elaborate on why 3D priors, rather than other forms of regularization, are necessary for improving consistency?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper does present an effort to enhance consistency in 3D shape reconstruction by combining 2D and 3D priors within a Bayesian framework. While the theoretical foundation is weak, the authors have demonstrated some commitment to experimenting with consistency terms, attempting to tackle an important challenge in generative modeling." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes the Consistency Diffusion Model (CDM) for 3D point cloud reconstruction from a single image, integrating both 2D and 3D priors to enhance consistency in the Bayesian framework. By converting 3D structural priors into a bound term, the approach aims to increase evidence in the variational Bayesian framework, and 2D priors from the input image are projected onto the 3D point cloud for additional guidance. The authors claim this framework offers a consistency advantage over existing methods and improves performance on synthetic and real-world datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tInsufficient Justification for Consistency Terms: The paper lacks a solid theoretical foundation for enforcing consistency terms at each diffusion step, which may not align with the iterative nature of the diffusion process. 
This raises concerns about the model’s validity.\n\t2.\tImpact on Variance of Generative Model: Enforcing consistency terms across all steps could reduce the model’s generative diversity, possibly resulting in outputs that lack the variability expected in a robust generative framework. This could push the model towards a U-Net-like structure, potentially sacrificing the inherent variability necessary for effective 3D generation.\n\t3.\tExperimental Limitations: The experiments do not convincingly demonstrate that this approach generalizes well. The benchmarks and comparative studies are limited, and it is unclear if the performance improvements observed are due to the proposed model’s consistency terms or other factors.\n4. I believe equation 5 is incorrect. The variational bound term should have the joint probability p(x_0 : x_t) as the numerator." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1.How is the camera matrix selected? If only random sampling is used, the images rendered from adjacent views will be very similar.\n\n2.Please refer to the questions and suggestions in the Weaknesses part." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.This paper integrates 3D and 2D priors to the reconstruction task. The results achieve SOTA performance in ShapeNet and Co3D dataset." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper aims to reconstruct point cloud from a single image. First, they convert 3D structural priors derived from the initial 3D point cloud as a bound term to increase evidence in the variational Bayesian framework. Second, they extract and incorporate 2D priors from the single input image, projecting them onto the 3D point cloud to enrich the guidance for diffusion training. The results show SOTA performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.The paper introduces 3D priors constraint to refine the reverse process. Does this part increase the training time of the model? It is better to give a comparison of the training time of the model with and without the 3D prior.\n\n2.Visual comparison is not sufficient. Only three samples were selected in the qualitative experiment of the Shapenet dataset (Figure 5), and the differences are not quite visible in all the samples except for the middle sample where CDM showed advantages. Besides, the advantage of CDM over PC2 is not apparent from the Visual comparison on the Co3D dataset in Figure 6.\n\n3.The depth map rendered from the point cloud is a random sample of the 3D geometry, and a lot of information is lost in the sampling process. 
It is better to directly adopt 3D geometric representation such as SDF as 3D prior.\n\n4.The paper lacks sufficient research and comparison on relevant methods. There are a large number of methods that can reconstruct point clouds from a single image with good results. For example, TriplaneGaussian[1] can generate multi-view rendering results in addition to point clouds; Michelangelo[2] can generate point clouds corresponding to text and images; and CLAY[3] trained a large model for point cloud generation. The ability of these methods to generate point clouds from images is not limited to a few single categories, and they have good generalization ability. These methods should be discussed and compared in the paper.\n\n[1] Zou Z X, Yu Z, Guo Y C, et al. Triplane meets gaussian splatting: Fast and generalizable single-view 3d reconstruction with transformers[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 10324-10335.\n\n[2] Zhao Z, Liu W, Chen X, et al. Michelangelo: Conditional 3d shape generation based on shape-image-text aligned latent representation[J]. Advances in Neural Information Processing Systems, 2024, 36.\n\n[3] Zhang L, Wang Z, Zhang Q, et al. CLAY: A Controllable Large-scale Generative Model for Creating High-quality 3D Assets[J]. ACM Transactions on Graphics (TOG), 2024, 43(4): 1-20." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Consistency Diffusion Models for Singel-Image 3D Reconstruction" }, "_bibtex": { "value": "@misc{\njiang2024consistency,\ntitle={Consistency Diffusion Models for Singel-Image 3D Reconstruction with priors},\nauthor={Chenru Jiang and Chengrui Zhang and Xi Yang and Jie Sun and Kaizhu Huang},\nyear={2024},\nurl={https://openreview.net/forum?id=385gQZuuuR}\n}" }, "abstract": { "value": "This paper delves into the study of 3D point cloud reconstruction from a single image. Our objective is to develop the Consistency Diffusion Model, exploring synergistic 2D and 3D priors in the Bayesian framework to ensure superior consistency in the reconstruction process, a challenging yet critical requirement in this field. Specifically, we introduce a pioneering training framework under diffusion models that brings two key innovations. First, we convert 3D structural priors derived from the initial 3D point cloud as a bound term to increase evidence in the variational Bayesian framework, leveraging these robust intrinsic priors to tightly govern the diffusion training process and bolster consistency in reconstruction. Second, we extract and incorporate 2D priors from the single input image, projecting them onto the 3D point cloud to enrich the guidance for diffusion training. Our framework not only sidesteps potential model learning shifts that may arise from directly imposing additional constraints during training but also precisely transposes the 2D priors into the 3D domain. Extensive experimental evaluations reveal that our approach sets new benchmarks in both synthetic and real-world datasets. The code will be released." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": { "value": [ "~Chenru_Jiang1", "~Chengrui_Zhang1", "~Xi_Yang1", "~Jie_Sun6", "~Kaizhu_Huang1" ] }, "authors": { "value": [ "Chenru Jiang", "Chengrui Zhang", "Xi Yang", "Jie Sun", "Kaizhu Huang" ] }, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Bound", "Variational Bayesian", "3D Point Cloud", "Single-Image", "Reconstruction" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": { "value": "jiang|consistency_diffusion_models_for_singelimage_3d_reconstruction_with_priors" }, "pdf": { "value": "/pdf/5e1dc3f92af19c8f24cf19a60c737b897228bb32.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/8c855e79f390b08aea7dc361464218cdee783eaf.zip" }, "title": { "value": "Consistency Diffusion Models for Singel-Image 3D Reconstruction with priors" }, "venue": { "value": "ICLR 2025 Conference Withdrawn Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Withdrawn_Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
38BBWrXUhP
Revisiting a Design Choice in Gradient Temporal Difference Learning
main
Active
gradient temporal difference learning
reinforcement learning
5;5;6
3;3;3
3;3;3
3;2;2
3;3;2
5.333333
3
3
2.333333
2.666667
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Is it necessary to take an increasing function f(t)? Other than removing the dependence between data at step t and t+f(t), is there any other reason to use an increasing function?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The problem to tackle is well stated, which is to stabilize off-policy learning and improve the previous algorithm ATD and GTD. Also, the paper is clearly written and easy to follow, with rigorously stated assumptions and lemmas." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper aims to develop a convergent algorithm for off-policy temporal difference learning under linear function approximation. The proposed algorithm directly minimizes the L2-norm of expected TD updates (NEU) $||Aw+b||^2$, improving the memory requirement of estimating matrix $A$ from the previous ATD algorithm. Meanwhile, the proposed algorithm reduces the number of learning rates from two to one compared to GTD, another NEU minimization algorithm. It maintains the convergent property with a convergent rate $\\tilde{O}(1/t)$. Moreover, the algorithm is tested on Baird’s counterexample and is shown to avoid divergence in the deadly triad." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The approach needs to be more motivated. GTD is known to suffer from a low convergent rate compared to TD. However, the proposed algorithm does not handle this key problem. More experiments to compare the convergence speed and some intuition on why the proposed algorithm can fasten the learning would be great.\n\nAlso, the algorithm needs to fit better into the literature. ETD, introduced by Mahmood and colleagues (2015), is another stable off-policy algorithm. Also, a target network is suggested to help convergence (Zhang et al., 2021; Fellows et al., 2023; Che et al., 2024). Che et al. (2024) compare their TD algorithm with GTD on Baird’s counterexample, showing much faster convergence.\n\nReference\n\nMahmood, A. R., Yu, H., White, M., & Sutton, R. S. (2015). Emphatic temporal-difference learning. arXiv preprint arXiv:1507.01569.\n\nZhang, S., Yao, H., & Whiteson, S. (2021, July). Breaking the deadly triad with a target network. In International Conference on Machine Learning (pp. 12621-12631). PMLR.\n\nFellows, M., Smith, M. J., & Whiteson, S. (2023, July). Why target networks stabilise temporal difference methods. In International Conference on Machine Learning (pp. 9886-9909). PMLR.\n\nChe, F., Xiao, C., Mei, J., Dai, B., Gummadi, R., Ramirez, O. A., ... & Schuurmans, D. (2024). Target Networks and Over-parameterization Stabilize Off-policy Bootstrapping with Function Approximation. arXiv preprint arXiv:2405.21043." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Here are a few questions that might affect the evaluation:\n1. Do the choices of $f(t)$ depend on the induced Markov chain’s mixing time? If yes, where is it in the result? If not, why?\n2. What is the convergence rate for GTD? Is it the same as the canonical on-policy TD as well?\n3. In Line 182, when $f(t)$ was a constant function, what would happen by following the classical convergence results mentioned in Line 186? What’s the consequence of using such a $f(t)$?\n4. In Line 227, the paper claims the technique of using a variable interval $T_m$ has not been used in any stochastic approximation and RL literature. Has it been used or studied in other fields? If yes, then it is worth mentioning the relevant literature.\n5. In Line 482, the paper claims that the finite sample analysis **confirms** that the proposed algorithm converges reasonably fast, but this analysis is based on a variant of the proposed algorithm with a projection step. Does the efficiency of the variant with the projection step guarantee or generally suggest the efficiency of the base algorithm?\n6. Is the proposed algorithm more or less sensitive to the learning rate compared to GTD? Since GTD has two hyperparameters, comparison methods like those in Ghiassian & Sutton, 2021 might be useful here." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The strengths of the paper include its originality, quality, and clarity:\n1. The idea in this paper is novel to the best of my knowledge. It’s neat to use two samples distanced away from each other to estimate the terms involving two $A$ matrices, which are independent if their gap is large. The sublinear memory requirement also renders this idea a practical approach.\n2. The quality of the paper is also a strength. The asymptotic convergence of the proposed algorithm is novel and may bring value to the community.\n3. The paper is very well written and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a new variant of the gradient temporal difference (GTD) learning algorithm for online off-policy policy evaluation with linear function approximation. The idea is to use two samples distanced away from each other to address the double sampling issue encountered during the derivation of GTD. The paper shows that when the distance between the two samples $f(t)$ used to estimate the gradient increases with a proper rate (e.g., $f(t)=\\ln^2(t+1)$), the new algorithm converges asymptotically, while its variant with a projection operator has a convergent rate comparable to on-policy TD. 
The consequence of this new GTD variant is that 1) it reduces the need for an additional set of weights and step size, and 2) it requires an additional memory of size $O(\\ln^2(t))$. Preliminary experiment results on Baird’s counterexample show the effectiveness of the proposed algorithm." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper has weaknesses in its significance and relevant work discussion:\n1. The paper may be limited in its significance. \n - On the theory side, the finite time analysis is based on a variant of the proposed algorithm with a projection step, which is absent in the actual algorithm. Thus, the comparison between its convergence rate in this case with that of on-policy TD may not be very valuable. Note that finite sample analysis of the actual algorithm is possible, as also pointed out in the paper. Obtaining such a result can strengthen the paper.\n - On the empirical side, the experiments presented in this paper only focus on Baird’s counterexample and are rather limited. Having more experiments, even in simple environments like FourRoom (Ghiassian & Sutton, 2021; Ghiassian et al., 2024), would help strengthen the claim that the proposed algorithm is effective. In addition to testing the proposed algorithm in environments like FourRoom, a comparison with other GTD algorithms can also make the paper stronger. Other researchers may find it useful to know how the proposed algorithm compares to others in terms of sample efficiency, stability, and hyperparameter sensitivity.\n2. The paper also lacks a thorough related work discussion on off-policy policy evaluation (OPPE) with linear function approximation. There have been many follow-up works on the GTD algorithm (Mahadevan et al., 2014; Ghiassian et al., 2020; Yao, 2023). While some of them are cited in the paper, the relationship between these later ideas building on GTD and the proposed method is not thoroughly discussed, which could be useful and inspire future research. Note that Yao (2023) also introduces a GTD variant with one step-size, so it may be necessary to clarify its distinction with your approach. In addition, the paper may also benefit from discussing another line of work that addresses the deadly triad, the ETD family (Sutton et al., 2016; Hallak et al., 2016, 2017; He et al., 2023). Specifically, how does the proposed approach compare to these methods in terms of the optimality of the fixed point and the convergence property? Having a more thorough discussion of these relevant works would strengthen the paper’s positioning.\n\nGhiassian, S., Patterson, A., Garg, S., Gupta, D., White, A., & White, M. (2020). Gradient temporal-difference learning with regularized corrections. ICML.\n\nGhiassian, S., & Sutton, R. S. (2021). An empirical comparison of off-policy prediction learning algorithms in the four rooms environment. arXiv preprint arXiv:2109.05110.\n\nGhiassian, S., Rafiee, B., & Sutton, R. S. (2024). Off-Policy Prediction Learning: An Empirical Study of Online Algorithms. IEEE Transactions on Neural Networks and Learning Systems.\n\nHallak, A., Tamar, A., Munos, R., & Mannor, S. (2016). Generalized emphatic temporal difference learning: Bias-variance analysis. AAAI.\n\nHallak, A., & Mannor, S. (2017). Consistent on-line off-policy evaluation. ICML.\n\nHe, J., Che, F., Wan, Y., & Mahmood, A. R. (2023). Loosely consistent emphatic temporal-difference learning. 
UAI.\n\nMahadevan, S., Liu, B., Thomas, P., Dabney, W., Giguere, S., Jacek, N., ... & Liu, J. (2014). Proximal reinforcement learning: A new theory of sequential decision making in primal-dual spaces. arXiv preprint arXiv:1405.6757.\n\nSutton, R. S., Mahmood, A. R., & White, M. (2016). An emphatic approach to the problem of off-policy temporal-difference learning. JMLR.\n\nYao, H. (2023). A new Gradient TD Algorithm with only One Step-size: Convergence Rate Analysis using $ L $-$\\lambda $ Smoothness. arXiv preprint arXiv:2307.15892." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see my comments in Weaknesses section." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper expand the idea from $A^TTD$ and proposed a new method to solve double sampling issue in off-policy learning. Compared with $A^TTD$, this methods required less memory. The authors also provided the convergence analysis of their method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposed a new solution to solve double sampling issue in off-policy reinforcement learning. Specifically, this paper provided another method to estimate $A^T A$ and $A^T b$ by introducing a function $f(t)$. The authors also provided finite-time analysis for their method as well as some numerical results." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "This paper is really interesting to me. However, I have several questions. \n\n1. The advantage of selecting $f(t)$ as an increasing function over a constant one is not immediately clear. The authors state in lines 186–187 that classical convergence analysis can be applied to establish the convergence rate. Thus, it seems that a constant $f(t)$ could also ensure convergence. Additionally, the experimental results suggest that setting $f(t)=2$ is sufficient to resolve Baird’s counterexample, which further supports the idea of choosing $f(t)$ as a constant.\n\n2. Relying on samples from several steps prior may introduce additional errors during policy improvement. Although this paper focuses exclusively on policy evaluation, it is important to mark that policy evaluation serves the purpose of policy improvement. In cases where the policy is continuously updated, the samples used to estimate $A^T$ may become inaccurate, introducing further errors.\n\n3. 
A minor issue appears in lines 206–207, where I believe the correct notation should be $\\bar{h}(\\omega_{t_m})$ instead of $h(\\omega_{t_m})$" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024revisiting,\ntitle={Revisiting a Design Choice in Gradient Temporal Difference Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=38BBWrXUhP},\nnote={under review}\n}" }, "abstract": { "value": "Off-policy learning enables a reinforcement learning (RL) agent to reason counterfactually about policies that are not executed and is one of the most important ideas in RL. It, however, can lead to instability when combined with function approximation and bootstrapping, two arguably indispensable ingredients for large-scale reinforcement learning. This is the notorious deadly triad. The seminal work Sutton et al. (2008) pioneers Gradient Temporal Difference learning (GTD) as the first solution to the deadly triad, which has enjoyed massive success thereafter. During the derivation of GTD, some intermediate algorithm, called $A^\\top$TD, was invented but soon deemed inferior. In this paper, we revisit this $A^\\top$TD and prove that a variant of $A^\\top$TD, called $A_t^\\top$TD, is also an effective solution to the deadly triad. Furthermore, this $A_t^\\top$TD only needs one set of parameters and one learning rate. By contrast, GTD has two sets of parameters and two learning rates, making it hard to tune in practice. We provide both asymptotic and finite sample analysis for $A^\\top_t$TD, where the convergence rate is on par with the canonical on-policy temporal difference learning. Key to our analysis is a novel refined discretization of limiting ODEs." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "gradient temporal difference learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/bc43654a3f2154c6f73b119ba3e3b9ce49ed7aff.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "Revisiting a Design Choice in Gradient Temporal Difference Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
38No4B8sx6
Refining CLIP's Spatial Awareness: A Visual-Centric Perspective
main
Active
Self-distillation; CLIP; Open-vocabulary dense prediction
unsupervised, self-supervised, semi-supervised, and supervised representation learning
6;6;6
4;5;4
3;3;3
2;3;3
2;3;3
6
4.333333
3
2.666667
2.666667
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see weaknesses." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Motivated by the observation that RLA methods suffer from notable loss in spatial awareness for CLIP ViTs, SCD is specifically designed to capture the spatial structure in a region. The widespread experiments show the superior performance of the proposed method across multiple open-vocabulary dense prediction benchmarks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper aims to improve upon the Region-Language Alignment (RLA) approaches for Contrastive Language-Image Pre-training (CLIP) models. In order to not only promote the linguistic alignment but also preserve the spatial awareness, Spatial Correlation Distillation (SCD) is proposed to plug into the existing methods such as RegionCLIP and CLIPSelf. Refiner is also introduced to enhance the regional representation of teacher model. The experiments on the open-vocabulary dense prediction tasks demonstrate the effectiveness of the proposed method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The motivation to design Refiner has been implied in CLIPSelf. When K = N, the proposed Refiner is almost the same as CLIPSelf, which reduces the technical novelty. Also, the ablation study of K is absent in this work.\n\nThe application of the proposed method is restricted. Spatial Correlation Distillation (SCD) is auxiliary and is to preserve the spatial awareness when another optimization is applied (e.g. RLA). Therefore, it seems that SCD cannot be applied independently since in this case the weights of teacher model and student model are always equal. Besides, R-SC-V is only learned from the teacher model that is optimized by Refiner (similar to CLIPSelf), so it cannot be further applied to CLIPSelf based approach.\n\nIn order to showcase the generalizability and scalability of the proposed method, the experiments with data scaling up are expected to be provided, which is missing in the current version." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. 
In the SC-RLA, how is the correspondence between the RoI regions from the teacher and student models ensured? The student model is being trained, so the number, order, and attributes of its RoI regions would become different from those of the frozen image encoder.\n\n2. In lines 242-258, would the contents of the embedded sampled images {X_i} affect the spatial awareness? Could you provide some quantitative analysis, such as the cosine similarity between {X_i} and X_t?\n\n3. Since both the motivation and the module aim to improve spatial awareness (which can also be seen from the visualization), are there more segmentation-related visualizations? Qualitative results using segmentation would be more convincing (e.g., finer edges)." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Experiments. The experiments are sufficient and solid to demonstrate the improvement brought by the framework.\n\n2. Significance. The framework is a convenient, plug-and-play, and effective pipeline to improve existing methods on OV tasks.\n\n3. Motivation. The motivation is clear. Fig. 1 and the intro explain the loss of dense prediction ability during the RLA process." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a framework called Spatial Correlation Distillation (SCD) to refine the spatial awareness of CLIP. To recover the visual perception of vanilla CLIP that is lost during Region-Language Alignment (RLA) training, it proposes a Spatial Correlation Distillation training process and a lightweight refiner to distill the visual relationships from vanilla CLIP. SCD can improve existing OV methods and achieves new SOTA on OV tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Novelty. The framework is composed of SCD and a refiner, and the novelty of SCD is limited. SCD is a common distillation module that preserves the similarity between the RoI features of the student and teacher models (a generic sketch of such a loss is appended after this record).\n\n2. Lack of analysis. This paper does not provide a deeper analysis of the experimental results. For example, why can R-SC effectively improve the classification accuracy in Tab. 1? In terms of motivation and structure, this module improves spatial perception and does not have much gain on the classification task with the annotated masks.\n\n3. Typo. What is the Fig.2.o in line 193?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. It might be better to use \"language\" or \"text\" instead of \"linguistic\".\n\n2. How come methods specifically designed for CLIP dense prediction, like CLIPSelf and RegionText, work even worse than vanilla CLIP?"
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This pager reveals the potential problems of previous works, that \"the RLA process projects dense visual embeddings into a text-oriented domain, making them incompatible with visual-centric objectives\". To tackle this, the paper proposes to conduct Spatial-Correlation-guided Region-Language Alignment with Refiner to preserve spatial awareness. This design is novel and reasonable.\n\n2. The performance improvement is significant compared with baseline methods.\n\n3. The experiments and analysis are comprehensive." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper aims to enhance CLIP's spatial awareness. That is to say, increases the quality of dense features extracted by CLIP. It proposes the Spatial Correlation Distillation (SCD) framework, which preserves CLIP's inherent spatial structure and mitigates degradation for spatial awareness by Region-Languaeg Alignment. It also introduces a lightweight Refiner that extracts refined correlations directly from CLIP before feeding them into SCD, based on an intriguing finding that CLIP naturally captures high-quality dense features." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper writing is not so rigorous. Authors should give clear definitions of each term they are discussing. Like what is the definition of \"spatial awareness\", what it means by a better dense feature, what is \"visual-centric\", what is \"intra-feature structural relationships\", etc.\n\nAs an example, spatial awareness here is (probably) defined as performance for tasks like localization and recognition, which I think, is equivalent to increasing the quality of dense features extracted by CLIP, which means different parts in the image with different semantics should be extracted with the features that are distinguishable from each other." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We refine CLIP’s region-language alignment by enhancing its spatial awareness, improving performance from both visual-centric and vision-language perspectives." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024refining,\ntitle={Refining {CLIP}'s Spatial Awareness: A Visual-Centric Perspective},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=38No4B8sx6},\nnote={under review}\n}" }, "abstract": { "value": "Contrastive Language-Image Pre-training (CLIP) excels in global alignment with language but exhibits limited sensitivity to spatial information, leading to strong performance in zero-shot classification tasks but underperformance in tasks requiring precise spatial understanding. Recent approaches have introduced Region-Language Alignment (RLA) to enhance CLIP's performance in dense multimodal tasks by aligning regional visual representations with corresponding text inputs. However, we find that CLIP ViTs fine-tuned with RLA suffer from notable loss in spatial awareness, which is crucial for dense prediction tasks. To address this, we propose the Spatial Correlation Distillation (SCD) framework, which preserves CLIP's inherent spatial structure and mitigates above degradation. 
To further enhance spatial correlations, we introduce a lightweight Refiner that extracts refined correlations directly from CLIP before feeding them into SCD, based on an intriguing finding that CLIP naturally captures high-quality dense features. Together, these components form a robust distillation framework that enables CLIP ViTs to integrate both visual-language and visual-centric improvements, achieving state-of-the-art results across various open-vocabulary dense prediction benchmarks." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Self-distillation; CLIP; Open-vocabulary dense prediction" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/a5a579b871219820f18d259dcedc27c48b62edeb.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Refining CLIP's Spatial Awareness: A Visual-Centric Perspective" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
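As a generic illustration of the correlation-distillation idea discussed in the record above, a loss of the following form preserves the pairwise similarity structure of region features between a student and a frozen teacher. The function name, the use of cosine similarity, and the MSE matching are assumptions chosen for illustration; this is not claimed to be the paper's exact SCD formulation.

```python
import torch
import torch.nn.functional as F

def spatial_correlation_loss(student_feats: torch.Tensor,
                             teacher_feats: torch.Tensor) -> torch.Tensor:
    # student_feats, teacher_feats: (num_regions, dim) region/dense embeddings.
    s = F.normalize(student_feats, dim=-1)
    t = F.normalize(teacher_feats, dim=-1)
    corr_s = s @ s.t()   # (R, R) intra-feature cosine-similarity matrix
    corr_t = t @ t.t()
    # Match the student's pairwise similarity structure to the frozen teacher's.
    return F.mse_loss(corr_s, corr_t.detach())
```

In the framing used by the reviews, a term of this generic form would be added alongside the region-language alignment objective so that linguistic alignment does not erase the intra-image similarity structure.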
38hLpTVpe7
Teaching Transformers Modular Arithmetic at Scale
main
Active
transformers;modular arithmetic;math
other topics in machine learning (i.e., none of the above)
3;3;5;5
3;4;3;4
3;3;4;3
2;1;2;2
3;3;3;3
4
3.5
3.25
1.75
3
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I've mentioned my questions in the weaknesses section." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The paper is mostly focused on experimental evaluation of different training strategies, and their experiments are well-detailed and reproducible. \n* The methodology proposed is well presented and easy to understand and follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a few techniques that promote faster convergence in learning modular addition with encoder-only transformers. The techniques include a slight modification of loss function, angular embedding of inputs and modifications of training distribution." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I've split my concerns into major and minor ones:\n\n### Major concerns:\n* I don't understand why solving modular addition in scale is important. The other papers that the authors have cited and compared their work to use the setting of modular addition as a means of studying different behaviours of training algorithms or models. The authors mention that it is important in cryptography literature, but they never elaborate on how \"learning to solve modular addition\" with the given inputs is an important task. If we have the angular embeddings, or the integers, or even one-hot embeddings, then solving the task is straightforward. \n\n* **Same setting having different results:** In table 7, the numbers of the bold row (N=20, q=257) are different from the numbers in the first row of Table 8. Don't these represent the exam same setting in running experiments? If so, where is the discrepancy coming from? This setting appears in other tables with other (different) numbers as accuracy as well, which is confusing.\n\n* Section 5.4: If I understand correctly, Figure 5 claims to depict the PCA visualization of the outputs. IIUC, the targets are the angular embeddings of modular sums, the output dimension is 2. I don't see why PCA is needed here, since the output dim is already 2. Furthermore, when MSE is low, it's clear that the outputs must correspond to the angular embeddings of the targets and must be distributed on a circle, and when MSE is high they should not. I don't see how this tells us anything about the internal workings of the model.\n\n* Overall, I think the techniques proposed require a practitioner to know about the structure of the problem (that we're going to solve a modular addition problem) and are not general beyond modular arithmetic. 
On the contrary, when we know that we're dealing with a modular addition problem, there are far superior approaches to solve the task than learning a deep network.\n\n### Minor concerns:\n* IIUC, Mohamadi et al's claim regarding the need for a fraction of data only applies to the so called \"kernel-regime\" where the network is not allowed to do any feature learning, and doesn't apply to trained networks. \n* For the cryptography use case that the authors have mentioned: does partial correctness (achieving non-trivial but also not 100% evaluation accuracy) matter in the mentioned use case? If not, how can one ensure 100% evaluation accuracy on a given task?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Is there a cryptanalytic application where a transformer implementing modular arithmetic, or something close to it, would be preferable to simply calling highly-optimized and accurate modular arithmetic routines?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The techniques used do improve performance on this problem, sometimes drastically, and indeed escape the symmetry-based lower bounds of Mohamadi et al. (2024) by using non-uniform sampling and representations which are not permutation-equivariant. This aligns somewhat with the results of Abbe et al. (2024).\n\nSome of the analyses of the impacts of different decisions in the training process are quite interesting." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper designs an architecture, representation, and dataset to use to train an encoder-only transformer model to perform modular addition of a fixed number of addends modulo a fixed prime." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper presupposes that it is interesting to train an ML model to perform modular arithmetic in order to get good performance. I would vehemently argue, despite the existence of several recent paper which do train ML models to perform modular arithmetic (many of which I do think are quite interesting and with whose details I am very familiar), that this is not of any interest whatsoever. Here is a function far more interesting to cryptanalysis for this task: `lambda q, nums: sum(nums) % q`. This function achieves 100% accuracy for any `N` and `q`, probably runs _many_ orders of magnitude faster than your trained model with _far_ less memory, and doesn't require 240 GPU-hours of training.\n\nSo why is there so much recent work on training ML models to do modular arithmetic? This is _because_ it's such an easy problem, where we can understand what the network is doing when, e.g., exhibiting grokking behavior, or thinking about curriculum design, etc. 
The focus of these papers is not on obtaining the best learned model, but on what the process of learning on this toy problem can tell us about learning in general.\n\nThus, a paper about obtaining the best ML model to do modular arithmetic seems entirely misguided to me. A paper using modular arithmetic as a case study to investigate problems like curriculum/training distribution design, out-of-distribution generalization, etc., could potentially be very interesting! There are a few parts of this paper that touch on things along these lines, and indeed the decisions about representation, the training distribution you use, etc., are intriguing. But they're in service of a useless problem. I would suggest instead taking the kinds of decisions you made here to get things to work as an idea to explore in more general cases, taking modular arithmetic as a test case, rather than trying to get the best modular arithmetic network." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- What about using active learning/sampling to generate training data? \n- The evaluation uses test data from a particular distribution (uniform). This is standard. But things can be different in applications. What if the test data (i.e., motivated via the cryptanalysis application mentioned in the intro) have a different distribution? How should the techniques be adjusted?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The work investigated in detail the existing training methods on the problem, identified potential drawbacks, and proposed corresponding techniques to address them.\n- The work provided empirical evidence that the proposed changes can help." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The work considered learning modular addition via transformers at scale and proposed three changes to the modular addition model training pipeline for this purpose: 1) diversifying the training data; 2) an angular embedding; 3) a new loss function. The work showed that these changes lead to improvement for learning at scale, scaling up to N=256 elements modulo q=3329. It also showed that these techniques generalize to other modular arithmetic problems (a few specific low-degree polynomials modulo q)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The problem addressed is quite limited: scaling up for a specific problem of modular addition tested over a uniform distribution. While the work provided some motivation, it is still unclear what the impact of the work is for future research/applications.\n- It is unclear if the technical contributions are significant. The changes proposed are natural and not surprising.
Furthermore, although the work tested on a few other modular arithmetic problems, those problems are specific and the evaluation is quite preliminary. It is unclear if the techniques can help for more general learning settings eg other algebraic reasoning tasks." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. **Local Minima at the Origin**: You mention that the model can converge on local minima like the origin of the unit circle, which hinders learning. Since the correct output for a label \\( x \\) would be represented as \\( \\cos(2\\pi x / q) \\) and \\( \\sin(2\\pi x / q) \\), could you clarify why the origin (0,0) acts as a local minimum in this context? It would be helpful to understand how this specific point prevents effective training, given the angular nature of the embeddings.\n\n2. **Digit-wise Tokenization for Modular Addition**: Have you experimented with digit-wise tokenization methods, such as representing numbers as sequences of digits, to evaluate how the model performs on modular addition tasks? It could provide insights into the model's ability to generalize on addition when individual digits are tokenized.\n\n3. **Comparison with Interpretability-focused Work**: In Table 2, many of the related works primarily address interpretability aspects rather than modeling improvements for modular addition. This focus makes direct comparison potentially less relevant. Could you elaborate on why these specific interpretability-focused works were chosen, and consider whether it might be beneficial to compare primarily with approaches that directly aim to enhance modular addition capabilities?\n\n4. **Comparison with Other Embedding Techniques**: Given that you propose a new embedding and custom loss, it would be helpful to see how it compares with existing methods designed for modular arithmetic or general embedding approaches, such as abacus embedding (https://arxiv.org/abs/2405.17399) or dice embedding (https://aclanthology.org/2020.emnlp-main.384.pdf). Have you tried these methods, and if so, how did they perform relative to your angular embedding? This comparison could add further depth to your evaluation of embedding strategies in modular arithmetic tasks." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "This paper’s key strengths lie in its innovative methodology and rigorous validation. The angular embedding and specialized loss function introduce solutions directly tailored to the demands of ML-based modular arithmetic. As modular arithmetic is foundational in cryptography, this work could help drive advancements in ML-powered cryptanalysis. 
The methodological rigor is enhanced by detailed ablation studies, and the inclusion of visualizations like PCA plots adds clarity, reinforcing the paper's accessibility and value." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper tackles the challenge of enabling machine learning models, specifically transformers, to handle modular arithmetic with significantly larger values for \\( N \\) and \\( q \\) than previously studied. Traditional ML models struggle with modular arithmetic, particularly with parameters like \\( N = 6 \\) and \\( q = 1000 \\). This work proposes three key modifications that together enhance the performance of transformers on modular addition tasks:\n\n1. **Enhanced Training Data Diversity**: By including a mix of simpler and rare modular addition examples, the authors aim to help the model generalize effectively.\n2. **Angular Embedding**: This technique maps integers onto a unit circle, aligning better with the periodic nature of modular arithmetic.\n3. **Custom Loss Function**: The authors introduce a specialized loss function designed to prevent convergence on local minima, ensuring that the model learns effectively.\n\nThese methods enable the transformer-based model to achieve high accuracy on modular addition tasks with values up to \\( N = 256 \\) and \\( q = 3329 \\), significantly surpassing prior results. The approach also shows potential for generalization across other modular arithmetic functions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While the proposed data distribution and loss modifications are effective, they add complexity. Discussing potential simplifications or alternative approaches for less resource-intensive implementation would be beneficial. While the model performs well up to \\( N = 256 \\) and \\( q = 3329 \\), addressing potential limitations as these parameters increase would add further depth." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We make several improvements that enable transformers to do modular arithmetic with large moduli (up to 3329) and many terms (up to 256)" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024teaching,\ntitle={Teaching Transformers Modular Arithmetic at Scale},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=38hLpTVpe7},\nnote={under review}\n}" }, "abstract": { "value": "Modular addition is, on its face, a simple operation: given $N$ elements in $\\mathbb{Z}_q$, compute their sum modulo $q$. Yet, scalable machine learning solutions to this problem remain elusive: prior work trains ML models that sum $N \\le 6$ elements mod $q \\le 1000$. Promising applications of ML models for cryptanalysis$\\textemdash$which often involve modular arithmetic with large $N$ and $q$$\\textemdash$motivate reconsideration of this problem. This work proposes three changes to the modular addition model training pipeline: more diverse training data, an angular embedding, and a custom loss function. With these changes, we demonstrate success with our approach for $N = 256, q = 3329$, a case which is interesting for cryptographic applications, and a significant increase in $N$ and $q$ over prior work. These techniques also generalize to other modular arithmetic problems, motivating future work." 
}, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "transformers", "modular arithmetic", "math" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/e2d48df0ee22753041ca0443c9e7fce2d3523e1d.pdf" }, "presentation": null, "primary_area": { "value": "other topics in machine learning (i.e., none of the above)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Teaching Transformers Modular Arithmetic at Scale" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
38kLrJNwaM
LEASE: Offline Preference-based Reinforcement Learning with High Sample Efficiency
main
Active
preference-based reinforcement learning;sample efficiency
reinforcement learning
5;5;6
3;4;3
2;2;3
2;2;3
3;2;3
5.333333
3.333333
2.333333
2.333333
2.666667
-0.5
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See the Weakness section." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The proposed LEASE algorithm is novel, which introduces a new way to handle the challenge of limited preference data with a learned transition model. The selection mechanism for unlabeled data based on confidence and uncertainty is a thoughtful contribution to improving the stability and accuracy of the reward model.\n\n-Theoretical Framework: The paper attempts to provide a theoretical foundation for the algorithm, which is a step towards more principled offline PbRL methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a novel algorithm for offline preference-based reinforcement learning (PbRL) aimed at addressing the challenges of designing rewards and the high costs of online interaction. The LEASE algorithm leverages a learned transition model to generate unlabeled preference data, which is then filtered through an uncertainty-aware mechanism to ensure the performance of the reward model. The paper claims to provide a generalization bound for the reward model and a theoretical improvement guarantee for the policy learned by LEASE. The experimental results are said to demonstrate that LEASE can achieve comparable performance to baseline methods with fewer preference data without online interaction." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- While the paper provides a theoretical analysis of the algorithm, it is not rigorous enough. For example, the approximation error of the reward function usually depends on a condition number that is exponential to $R_{\\text{max}}$ when learned from preference data [1], but it seems missing from the derived bound. The approximation error of reward error does not directly translate into an additional error term in the performance bound and requires careful treatment (e.g., use a pessimistic reward function [2]). The authors use a handwaving argument (i.e., the law of large numbers) to derive (A.24) from (A.23), but it is not accurate. Using a concentration inequality is necessary in the finite sample case.\n\n- The paper lacks some benchmarks and baselines to validate the effectiveness of the proposed method. For benchmarks, The D4RL benchmark is known to be insensitive to the accuracy of the reward function [3], and adding benchmarks like Meta-World would greatly strengthen the paper. Also, there are some recent works on offline PbRL that have a strong performance like [4,5], and LEASE should be compared with them.\n\n\nReferences\n\n[1] Pacchiano, Aldo, Aadirupa Saha, and Jonathan Lee. 
\"Dueling rl: reinforcement learning with trajectory preferences.\" arXiv preprint arXiv:2111.04850 (2021).\n\n[2] Hu, Hao, et al. \"The provable benefits of unsupervised data sharing for offline reinforcement learning.\" arXiv preprint arXiv:2302.13493 (2023).\n\n[3] Li, Anqi, et al. \"Survival instinct in offline reinforcement learning.\" Advances in neural information processing systems 36 (2024).\n\n[4] Kim, Changyeon, et al. \"Preference transformer: Modeling human preferences using transformers for rl.\" arXiv preprint arXiv:2303.00957 (2023).\n\n[5] Zhang, Zhilong, et al. \"Flow to better: Offline preference-based reinforcement learning via preferred trajectory generation.\" The Twelfth International Conference on Learning Representations. 2023." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "In addition to Weaknesses 1-3, we have following questions.\n1. Some details of the algorithm are missing. In Line 6 of Algorithm 1, which policy is used together with the transition model to generate the data? In Line 8, the “reward update condition” is not clear. Additionally, there lacks an explanation for the “the reward model only update once instead of updating constantly in this process” statement.\n2. Can Theorem 1 show that using augmented data is better? When $N_u=0$, it is clear that the constant term increases, but the pseudo-labeling error becomes zero. Is it possible that the bound is even tighter with no augmentation?\n3. How well is the learned transition model? A bad transition model can lead to poor augmentation quality.\n4. In Figure 3, “the linear relationship between reward predicted by LEASE and ground truth is better” is not very clear. What is the possible reason that FRESH’s predictions are very narrow?\n5. Some minor problems:\n(1) In the “Model-based Reinforcement Learning” part in section 2, model-based RL is not always offline, and the presentation is a bit.\n(2) A ‘tilde’ is missing for $N_u$ in Equation 9." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper is well-organized, with the methodology and theoretical contributions presented clearly, making the core ideas easy to understand.\n2. Improving sample efficiency in offline PbRL is an important and challenging problem, with significant implications for many real-world applications.\n3. The paper provides contributions both in empirical algorithm development and theoretical analysis. Although there are concerns regarding the assumptions, the theoretical contributions add valuable insight into understanding the role of reward models and policy performance in PbRL.\n4. 
In addition to benchmark scores, the paper includes experiments that evaluate performance with varying amounts of preference data, as well as an analysis of the relationship between the learned reward model and the ground truth. This helps in understanding the effects of the different components of LEASE." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies the problem of improving sample efficiency in offline preference-based reinforcement learning (PbRL). The proposed LEASE algorithm aims to address this issue by generating synthetic unlabeled segment pairs using an ensemble of learned transition models. These synthetic pairs are subsequently labeled with an ensemble of pre-trained reward models, followed by a filtering process that ensures the quality of the pseudo labels. The filtering mechanism employs a confidence principle, which requires that the models have high certainty in discriminating between segment preferences, and an uncertainty principle, which stipulates low variance in the predictions of the ensemble models. An offline RL algorithm can then be employed to learn on the augmented labeled dataset. The paper also supports its claims with theoretical analysis, providing an upper bound on the reward model's error learned on an augmented dataset and a bound on the policy improvement. The empirical results on the D4RL benchmark demonstrate that LEASE achieves comparable performance to baselines while using less preference data." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The proposed LEASE algorithm closely follows the pipeline of Surf[1], with only two major differences: (1) Surf augments data using random temporal cropping, whereas LEASE generates synthetic data with a learned transition model; (2) Surf only uses confidence for label filtering, whereas LEASE employs both confidence and uncertainty principles. Despite these differences, a direct comparison with Surf is missing, which would help clarify the effects of the introduced transition model and the uncertainty principle on the final performance. Without this comparison, it is challenging to establish the uniqueness or superiority of LEASE.\n2. The main results in Table 1 compare LEASE with URLHF, a previous baseline that uses more data than LEASE. However, it is crucial to also include a comparison with URLHF using the same amount of data as LEASE. This would allow readers to determine whether LEASE truly achieves superior performance with fewer data or if the perceived gains are simply due to the different quantities of data used. The current results are not sufficiently convincing without this comparison.\n3. The theoretical analysis relies on assumptions that may be unrealistic in practical settings, and the connection between theory and the empirical algorithm is weak. Specifically:\n - Assumption 2: Given a fixed learned reward model, it is possible to construct an adversarial unlabeled dataset such that the pseudo-labeling error \\(\\eta\\) becomes very large. 
More detailed analysis is needed to understand whether the specific data generation process of LEASE can mitigate such worst-case scenarios effectively.\n - Filtering Mechanism: While it is intuitive that filtering low-quality data improves labeling accuracy, there is no clear theoretical justification for why the proposed filtering mechanism (confidence + uncertainty) is superior to previous methods that only use confidence.\n\n[1] Jongjin Park, Younggyo Seo, Jinwoo Shin, Honglak Lee, Pieter Abbeel, and Kimin Lee. Surf:\nSemi-supervised reward learning with data augmentation for feedback-efficient preference-based reinforcement learning. In 10th International Conference on Learning Representations, ICLR 2022." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to the weakness part." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The problem of preference efficiency is vital and fundamental in offline meta-RL.\n2. The proposed method is sound and sensible.\n3. Experiment results are convincing." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a novel model-based offline RL algorithm that improves the efficiency of utilizing limited preference data. LEASE utilizes a learned transition model to rollout data, and label preferences with confidence and uncertainty measures. LEASE can achieve high performance with as few as 100 queries on mujoco tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Lack of analysis on the effect of transition model accuracy. The learned transition model can be inaccurate, and accumulate errors during rollout. Will this do a lot of damage to algorithm performance?\n2. Lack of analysis of baseline algorithms' performance with different numbers of preference data. How much data is needed for baseline algorithms to achieve comparable performance to LEASE?\n3. I would like to see results of more baseline algorithms, e.g., previous model-based offline RL algorithms." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024lease,\ntitle={{LEASE}: Offline Preference-based Reinforcement Learning with High Sample Efficiency},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=38kLrJNwaM},\nnote={under review}\n}" }, "abstract": { "value": "Offline preference-based reinforcement learning (PbRL) provides an effective way to overcome the challenges of designing reward and the high costs of online interaction. 
However, since labeling preference needs real-time human feedback, acquiring sufficient preference labels is challenging. To solve this, this paper proposes a offLine prEference-bAsed RL with high Sample Efficient (LEASE) algorithm, where a learned transition model is leveraged to generate unlabeled preference data. Considering the pretrained reward model may generate incorrect labels for unlabeled data, we design an uncertainty-aware mechanism to ensure the performance of reward model, where only high confidence and low variance data are selected. Moreover, we provide the generalization bound of reward model to analyze the factors influencing reward accuracy, and demonstrate that the policy learned by LEASE has theoretical improvement guarantee. The developed theory is based on state-action pair, which can be easily combined with other offline algorithms. The experimental results show that LEASE can achieve comparable performance to baseline under fewer preference data without online interaction." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "preference-based reinforcement learning", "sample efficiency" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/d6fbdc6f11cec95660865c5830ef43e9b1c1a310.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/5261634b1dc75e5ed12c0ad15269e442c9b1bc7b.zip" }, "title": { "value": "LEASE: Offline Preference-based Reinforcement Learning with High Sample Efficiency" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
39JM3A3KS3
Revisiting On-Policy Deep Reinforcement Learning
main
Active
Deep reinforcement learning;on-policy;policy gradients
reinforcement learning
3;3;3;5;6
4;4;3;5;2
2;2;2;3;3
2;1;2;2;3
4;3;2;3;3
4
3.6
2.4
2
3
-0.310087
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Some minor comments:\n\nThere seem to be several details missing, e.g. what implementation is used to produce the baseline for PPO; what is the actually benchmark that's being used (the paper generically cites Mujoco), etc.. (Apologies if I've missed these.) \n\nSome citations are messed up (e.g. bottom of page 3).\n\nThe paragraph starting in line 294 on page 6 is not clear." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper is mostly clearly written. It proposes several reasonable, albeit well known, algorithmic components and integrates them into the PPO algorithm. It shows experimental results that suggest that these modifications can lead to improvements compared to a vanilla PPO baseline." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes several modifications to the PPO algorithm: entropy regularization, off-policy value function learning, and discounting of the state distribution. It shows experimental results that investigate the effect of these modifications and compares them to a vanilla PPO implementation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "None of the proposed modifications is novel, they have all been well studied in the literature. The paper dedicates a significant amount of space to reviewing these fairly well known ideas. I don't think that merely putting them together in a new combination is in itself a significant contribution.\n\nThe experiments are not conclusive since important comparisons to SOTA off-policy algorithms are missing. Since the paper introduces effectively an off-policy component into the algorithm (with the need to implement a replay buffer etc.), I would have really liked to see this comparison. Indeed the authors state (in the limitations) that the proposed combination of algorithmic components underperforms such existing algorithms which begs the question why one should use the combination proposed in the present paper. (NB, some off policy algorithms such as MPO also use a trust region and in that respect bear similarities to PPO.)\n\nFor this to be a strong paper I would have expected an insightful discussion why the specific algorithmic combination should be particularly useful / interesting, a demonstration that it clearly outperforms existing algorithms on relevant problems, and a detailed analysis why this is the case." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I do not have any direct questions to the authors aside from those listed in the weaknesses section above. \nMy main concern with the paper is the rigor of the experimental evaluation which in addition with a lack of novelty for the suggested improvements leave me wanting for clear conclusions I would trust after reading the paper." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The adjustments made to PPO are reasonably well motivated and pull directly from the existing literature on off-policy methods. \n- There is a lack in the literature for good empirical evaluations of existing RL algorithms in fair comparisons; bridging the gap between on and off-policy methods (as done here) certainly fills part of this void.\n- The stability of PPO is of high practical relevance for varying applications from control to RLHF of large models and thus any improvements are relevant to the community." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper revisits on-policy RL, which is still one of the most predominant paradigms for learning controllers in simulation (or nowadays for RLHF of large models) since on-policy RL can give high quality (and minimally biased) policy improvement. The authors note that despite the simplicity of the theory underlying basic on-policy algorithms, in practice (partially due to the fact that on-policy algorithms have to trade-off optimization and exploration) they can be brittle/sensitive to hyperparameter settings.\n\nThe authors revisit the and robustify a popular on policy method (PPO) utilizing some of the insights from the recent literature on policy optimization; e.g. taking inspiration from recent results from the off-policy literature (i.e. SAC and others) such considering a maximum entropy formulation and learning an action-value (Q-function) critic instead of a state value function." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The experiments are unfortunately fairly limited in scope. Only 6 mujoco control domains are used and only two of them (and and humanoid) would be considered high dimensional in 2024. This limits the evidence that the paper can present for its suggested modifications seveerly. \n2. The presentation of the experiments is lacking:\n2a: A comparison to baseline PPO is presented on two domains in Figure 1 and 2. With PPO failing on the high dimensional domains. This doesn't inspire huge confidence in the results. What is causing this? 
Is the asymptotic performance fine, with the main difference just being the speed-up from the Q-function, such that standard PPO would simply need to run much longer?\n2b: Further, Figure 3 ablates some choices of the algorithm but again seems lacking. We get no insight into which of the proposed modifications exactly makes things work. For example: how would standard PPO but with a Q-function do? It also seems like PPO without discounting could be fine on-policy (but we are missing those results here, i.e. the combination of on-policy and no discounting).\n2c: The practical implementations of PPO for any domain with higher dimensional observations (or larger models) might consider computing the loss only on a trajectory snippet extracted from a full episode. It is unclear how that would affect e.g. the discounting.\n3. Out of the three proposed modifications, two are already routinely considered in the literature/implementations: entropy regularization is a standard feature in many PPO implementations; using discounting for the 'policy gradient' loss has been considered multiple times in the literature (also partially noted by the authors) and has not been consistently proven to make a big difference, so most implementations omit it. This leaves the reviewer thinking that the main contribution is to consider learning an action-value critic off-policy, but unfortunately the experiments do not properly ablate and compare this modification (see above).\n4. In many applications it is generally hard to learn an action-value critic (since conditioning on high-dimensional actions comes with its own problems), especially when dealing with large models and/or large action spaces, so the algorithm here may not be generally an improvement in all cases (e.g. the situation might look very different for RLHF of large models or for experiments requiring vision inputs)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See above." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Paper is well written and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces PPO+. PPO+ aims to augment the current PPO algorithm with best-known practices from well-known algorithms such as SAC and TD3, as well as theoretical principles, to improve the performance and sample efficiency of PPO. These features are: 1) using off-policy data by introducing a replay buffer, 2) learning a Q-function instead of only a value function, 3) using an entropy bonus, and 4) discounting the state distribution."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "--This paper presents some interesting ideas, but I think it could be strengthened by highlighting its contributions more clearly. For example, incorporating off-policy data with a replay buffer and learning a Q-function instead of a state-value function shifts the algorithm towards the off-policy RL domain. To really showcase the algorithm's effectiveness, it would be beneficial to see comparisons with established off-policy algorithms like SAC and TD3. This would provide a clearer picture of its performance within the broader context of off-policy RL.\n\n--It's also worth noting that one advantage of on-policy algorithms is their ability to learn by fitting only a value function, which can be simpler than fitting a Q-function. Introducing Q-learning in this context might add complexity, which seems to contrast with the authors' claim of increased simplicity. It would be helpful to see further discussion on this design choice and its potential implications in the context of on-policy RL.\n\n--Adding an entropy bonus is a well-established technique, having been introduced in the original PPO paper. The entropy weight is already a standard hyperparameter in most PPO implementations. More discussion on how the use of the entropy bonus here differs from standard PPO would be helpful. \n\n--Authors noted, reintroducing discounting to the state distribution doesn't yield significant performance improvements. A discussion on in which scenarios using a discounted state distribution would be beneficial would also be helpful.\n\n--Finally, the experimental results presented aren't entirely conclusive. In some domains, PPO performs better than PPO+. It's more fair to compare PPO+ with off-policy algorithms. However, as the authors mentioned, their method doesn't currently outperform SAC or TD3, despite incorporating many of the components from those algorithms. This raises questions about the specific benefits and potential advantages of the proposed modifications." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Could you elaborate on the performance gap between PPO+ and off-policy methods like SAC and TD3? What are the potential challenges and opportunities for bridging this gap within the on-policy framework?\n- Have you considered evaluating PPO+ on other benchmark environments beyond MuJoCo control tasks?\n- Given PPO+’s slight increase in complexity, do you have insights into how it compares in terms of training time relative to PPO, especially as task dimensionality increases?" 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper clearly identifies the limitations of PPO and motivates the need for a more principled approach.\n- The paper thoroughly reviews relevant literature on on-policy RL, maximum entropy RL, trust region methods, and actor-critic methods, effectively placing the proposed approach within the existing litterature.\n- PPO+ presents theoretically grounded modifications, such as leveraging off-policy data in on-policy settings, which could broaden the applicability of PPO and reduce the need for extensive tuning.\n- Experimental results, especially on MuJoCo environments, show consistent improvements over PPO, suggesting that PPO+ delivers better results in continuous control tasks.\n- The ablation studies strengthen the authors' claims, providing insights into how each enhancement (e.g., entropy regularization) impacts performance." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces PPO+, a new on-policy deep reinforcement learning algorithm that builds upon and improves the Proximal Policy Optimization (PPO) algorithm. The authors identify several key limitations of PPO, including sensitivity to hyperparameters and deviations from the theoretical foundations of on-policy RL. They propose solutions to address these shortcomings, resulting in an algorithm that is more principled, robust, and achieves state-of-the-art performance for on-policy methods on MuJoCo control tasks. PPO+ incorporates three major improvements: (1) correct discounting in policy gradient computation, (2) integration of off-policy data for critic learning, and (3) maximum entropy regularization." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The experiments are currently limited to MuJoCo control tasks. Evaluating PPO+ on a wider range of environments would provide more comprehensive proof of its capabilities.\n- A discussion of the performance gap between PPO+ and off-policy counterparts would strengthen the paper.\n- While the focus on PPO is understandable given its popularity, the paper would benefit from comparing PPO+ to other on-policy algorithms beyond PPO.\n- The authors acknowledge that optimizing for the discounted objective increases sensitivity to the choice of discount factor. While they present results for two different discount factors, further investigation into this sensitivity and strategies for mitigating it would enhance the practicality of PPO+." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "### Questions\n\n1. Could the authors provide more empirical evidence supporting that PPO+ is less hyperparameter-sensitive? 
Specifically, how does PPO+ perform across a range of hyperparameter settings compared to PPO across that same range? Additionally, does the introduction of new hyperparameters such as entropy regularisation, the number of critics, and the replay buffer size now make it even harder to tune for new environments?\n2. What is the effect of the CrossQ modifications? Do you use them with the PPO baseline as well? Is there an ablation of PPO+ without the CrossQ modifications?\n3. Why was only MuJoCo used for evaluation? Would the authors be willing to extend their tests to additional benchmarks such as discrete action environments or grid-based tasks to validate the generalizability of PPO+?\n4. Could an ablation study comparing the separate backbone used in PPO+ with a shared backbone approach be added to verify that the performance gains are due to the proposed modifications and not just architectural differences?\n\n### Suggestions\n1. **Use of Evaluation Methodology**: Consider using evaluation methodology like [rliable](https://github.com/google-research/rliable) to present more statistically robust results.\n2. **Additional Comparisons**: Including at least one other on-policy algorithm (e.g., VMPO or SPO) would provide valuable context and strengthen the impact of the results.\n3. **Diversify Environment Tests**: Extending the evaluation to other types of environments and presenting results where the hyperparameters are consistent across these tests could better support the claims of reduced hyperparameter sensitivity.\n\nUltimately, I liked the paper, but I think that without an ablation on the shared torso (i.e., using one for the PPO baseline) and without results on a different environment suite, I am not willing to accept the paper. Additionally, the use of the CrossQ modifications concerns me, as we don't fully know the interaction of these modifications. It's possible a lot of the results come from here as well. If my core concerns are addressed, I’m willing to raise my score." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. **Clarity and Methodology**: The paper is clearly written and thoroughly explains the methodology behind PPO+. It provides pseudocode and hyperparameter details, making the algorithm implementation straightforward and moderately reproducible.\n2. **Combination of Techniques**: Although the modifications themselves may not be entirely novel, the combination presented in PPO+ and its alignment with theoretical foundations is practical and beneficial for the RL community, especially since it's relatively easy to implement.\n3. **Ablation Studies**: The ablations are well-constructed, demonstrating the effectiveness of the individual components introduced in PPO+." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper aims to address the limitations of PPO by introducing a variant called PPO+. The authors propose several theoretically motivated modifications to reduce hyperparameter sensitivity and eliminate reliance on implementation-level tricks. PPO+ is designed to maintain the simplicity of PPO while aligning more closely with the theoretical principles of on-policy reinforcement learning. The paper evaluates PPO+ on MuJoCo benchmarks and claims state-of-the-art performance among on-policy methods."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Limited Comparisons**: The paper's claims of state-of-the-art performance are only made relative to PPO. Including comparisons with other on-policy methods like VMPO (https://arxiv.org/abs/1909.12238) or SPO (https://arxiv.org/abs/2402.07963) would provide a more comprehensive context and strengthen the results as I do not believe you can make this claim given only PPO as a comparison.\n2. **Backbone Usage**: PPO+ uses a separate backbone for the critic and actor networks, while PPO does not. Previous work by Andrychowicz et al. (2021) suggests that separate networks generally perform better. The lack of an ablation study comparing the shared and separate backbones in PPO+ raises serious concerns about the true source of performance gains.\n3. **Hyperparameter Sensitivity Claims**: The authors don’t explicitly claim that PPO+ is less hyperparameter-sensitive than PPO however they state it as a motivating factor for the creation of it and the paper does not provide any empirical evidence to support this. Without testing the robustness across different environments or hyperparameter settings, it seems that PPO+ doesn’t necessarily address a core motivation.\n4. **Limited Evaluation**: The evaluation is restricted to Mujoco environments. While this is a common benchmark, it is not sufficient to demonstrate that PPO+ consistently outperforms PPO or is more generalised. Testing in a broader set of environments, like grid-based or discrete action spaces, would provide more robust support for the authors' claims.\n5. **Overfitting to MuJoCo**: Given the limited environment diversity and potential over-tuning for Mujoco tasks, it is unclear if PPO+ is truly an improvement over PPO or merely a set of optimisations tailored to a specific domain." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We enhance PPO by using discounted policy gradients, off-policy data to train the critic, and adding maximum entropy bonus, while simplifying the implementation." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024revisiting,\ntitle={Revisiting On-Policy Deep Reinforcement Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=39JM3A3KS3},\nnote={under review}\n}" }, "abstract": { "value": "On-policy Reinforcement Learning (RL) offers desirable features such as stable learning, fewer policy updates, and the ability to evaluate a policy’s return during training. While recent efforts have focused on off-policy methods, achieving significant advancements, Proximal Policy Optimization (PPO) remains the go-to algorithm for on-policy RL due to its apparent simplicity and effectiveness. However, despite its apparent simplicity, PPO is highly sensitive to hyperparameters and depends on subtle and poorly documented tweaks that can make or break its success--hindering its applicability in complex problems. In this paper, we revisit on-policy deep RL with a focus on improving PPO, by introducing principled solutions that enhance its performance while eliminating the need for extensive hyperparameter tuning and implementation-level optimizations. Our effort leads to PPO+, a methodical adaptation of the PPO algorithm that adheres closer to its theoretical foundations. 
\nPPO+ sets a new state-of-the-art for on-policy RL on MuJoCo control problems while maintaining a straightforward trick-free implementation. Beyond just performance, our findings offer a fresh perspective on on-policy RL that could reignite interest in these approaches." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Deep reinforcement learning", "on-policy", "policy gradients" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/a34e12c95f64b642562b9c6f6c97dcf1d1a953bc.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Revisiting On-Policy Deep Reinforcement Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
39n570rxyO
Towards Generalisable Time Series Understanding Across Domains
main
Active
Time Series Analysis;Multi-Domain;Self-Supervised Learning
learning on time series and dynamical systems
3;5;5;5;6
4;3;4;4;4
2;3;2;2;3
2;3;2;2;3
2;2;3;4;4
4.8
3.8
2.4
2.4
3
-0.102062
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "See details in weakness." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper targets a very important research problem, the time series foundation model. Because the time series data has very high variance across different domains and different tasks. How to integrate them and train one foundation model remains challenging.\n2. This paper has a very high quality of preparation. The writing, the organization, and the figure are prepared nicely and with enough details.\n3. The results shown in Tab 1, 2, and 3 are competitive compared to baseline TS models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents OTiS, a pre-trained foundation model on large-scale time series data to support multi-tasks across domains. Extensive experiments are conducted to demonstrate the powerful performance of the foundation model. This paper is prepared in high quality and can be accepted." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Add a subsection to show which category baselines will be compared. For example, traditional TS model, deep learning, TS foundation model, etc.\n2. I expect a comparison with some SOTA TS foundation models. For example, https://arxiv.org/abs/2405.02358 . If this part is added, that would be great.\n3. Currently, the author use fine-tune to adapt the pre-trained model to various downstream tasks. Can you also add one more subsection to test the prompting on this TS foundation model? That would be another great point." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Please feel free to address any misunderstandings I've stated in the weaknesses. Answers to the following questions would help me better calibrate my score:\n1. How long does it take to finetune on new tasks?\n2. How does the finetuned model perform compared to task-specific models? Are these testing datasets really good cases that need pre-trained models?\n3. How do you get the ground truth embeddings in Figure 3?\n4. 
Is there any intuition around what information could be shared across such different domains to make pre-training on them useful?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This paper has many strengths:\n* The key idea to condition the model on different variables and domains is good. Indeed many related works effectively ignore this information.\n* The paper is overall written quite well and arguments are presented clearly.\n* The experiments investigate multiple axes, including ablation of their method and different dataset and model sizes, and visualizations of the embeddings.\n* The public data and model weights will help the community build on this work." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a time series model architecture and a pre-training objective. The key idea is that their architecture acknowledges that training and testing time series may have different sampling rates and variables. The authors propose a straightforward tokenization scheme to learn embeddings for different variables, which can get added onto regular patch and temporal embeddings, thereby conditioning the predictions on the measured variables. They then pre-train their model on a collection of existing datasets, and evaluate its performance by finetuning on new datasets for some forecasting, regression, and classification datasets. They find that finetuning their model on new datasets can outperform other recent methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "This paper has weaknesses to address:\n* The major weakness of this paper is the extremely limited experiments section. There are many experiments, yet almost no explanation of how they're run or interpretation of the results. Most of the results are written like an advertisement, mostly just stating the method outperforms others. This leaves the reader unclear why the performance gains happen. Ultimately it's not clear when/why the findings would generalize. The result is that some claims appear to be quite overstated. For example, L423-L424 states *\"embeddings of domains with shared high-level semantics cluster together, as depicted in Appendix E.1. For example, embeddings of mono and stereo audio group closely, as do those of banking and economics.\"* But this is cherry-picked---Temperature is way closer to Mono and Stereo Audio than Banking is to Economics.\n* Similarly, many important experimental details are missing or relegated to the Appendix, and the Appendix also includes almost no explanations or interpretations. For example, the PCA experiments in Figures 3, 7, and 8 aren't explained.\n* It's unclear how many variables actually overlap between training/testing, which seems to be a key element to make the model outperform others. Yet this isn't analyzed. Showing that others fail by ignoring other variables should be a key element of the experiments." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "Authors use Github Link to share the code, leading to potential personal information leakage. This may require further investigation." }, "flag_for_ethics_review": { "value": [ "Yes, Privacy, security and safety" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- How does the domain-specific tokenizer adapt to unseen domains with distinct variate structures?\n\n- Additionally, how does the domain-specific tokenizer generalize across different systems within the same domain? For instance, while both electrical transformers and power generators belong to the \"energy\" domain, they exhibit differing properties and produce distinct time series readings. How does the sub-domain adaptation discussed in Section 3.1 address this scenario?\n\n- A broader question, not specific to this paper: At what level of granularity should we define the domain?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper is well-written and easy to follow. \n\n- Authors spot the important fact that the variate structure is heterogeneous across domains and this structure may represent more complex relationships. \n\n- The visualization for variate embedding seems to be interesting and insightful. \n\n- A substantial portion of this research focuses on EEG signals, which presents a novel and promising approach. The authors introduce an innovative method to model a \"specific set of systems\" that, despite being observed differently—such as TDBrain and SEED with 19 channels versus LEMON with 62 channels—remain comparable." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Authors spot the important fact that the variate structure is heterogeneous across domains and this structure may represent more complex relationships. Thus, they propose a time series pre-training pipeline called OTiS. The OTiS is composed of a specially designed tokenizer that can add domain-specific signature to the time series and a novel loss for pretraining." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- As noted in the strengths, this work addresses the challenge of generalizing across datasets that contain time series of similar systems but are recorded differently, such as variations in sampling rates and physical values. However, the claims regarding cross-domain generalization may be overstated.\n\n- From the perspective of generalized time series analysis, the primary contribution of variate-specific embedding may not be effective in other systems where the interrelationships between variates are not as straightforward as their spatial arrangement (e.g., the electrodes in EEG as depicted in Figure 3 of the manuscript). 
In different physical systems, two variates may exhibit complex computational relationships (e.g., voltage and current as described by Ohm's Law), complicating the direct modeling of variates as embeddings." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See the weaknesses" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper is well-written, and the method is easy to understand. The authors clearly articulate how they consider the heterogeneity of different domain time series to achieve multi-domain time series forecasting.\n\n2. This paper focuses on the problem of multi-domain time series analysis, which is crucial for building generalizable foundational models for time series.\n\n3. The experimental section utilizes a large amount of data, and the model is open-source, contributing particular engineering value to the community." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents OTiS for multi-domain time series analysis, building on existing pre-training paradigms for time series. It allocates domain-specific variable embeddings to distinguish the heterogeneity of different variables across domains and enhances the model's ability to learn temporal causal relationships through a dual-masking strategy. Additionally, it introduces NCC loss to capture global patterns. Experimental results demonstrate that the proposed method achieves competitive performance in time series classification, regression, and forecasting tasks across multiple domains compared to SOTA methods. Visualization results further highlight the effectiveness and interpretability of the domain-specific variable embeddings." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper mentions that one of the challenges of cross-domain time series models is the significant differences in temporal dynamics and sampling frequencies among different domains. However, the paper uses the same patch size for all domains when dividing patches, failing to accommodate the unique sampling rates of different domains. This oversight means the paper does not sufficiently consider the differences in sampling rates across domains. Additionally, using a shared patch projector to encode the temporal dynamics within each patch does not adequately address the differences in temporal dynamics between domains. While this approach may be common in previous works, it does not consider the temporal heterogeneity among domains.\n\n2. The method of considering variable heterogeneity through learned variable embeddings is not uncommon. 
In spatiotemporal prediction, some methods [2][3] have already employed learnable embeddings to explicitly distinguish heterogeneous spatiotemporal patterns by learning time-specific and space-specific parameter spaces.\n\n3. [1] proposed using textual descriptions to label different time series domains for cross-domain time series forecasting, utilizing a channel-independent strategy. In contrast, the domain-specific variable embeddings in this paper correspond to a channel-mixing strategy. I look forward to seeing a comparison between these two strategies in cross-domain time series.\n\n4. The experimental section lacks details about the baselines. How were these methods selected? Were they pre-trained and fine-tuned? If so, what data was used for pre-training and fine-tuning?\n\n5. How does the performance of the proposed method compare to conventional time series classification or forecasting methods trained on a single specific dataset?\n\n[1] UniTime: A Language-Empowered Unified Model for Cross-Domain Time Series Forecasting, WWW, 2024\n\n[2] Heterogeneity-Informed Meta-Parameter Learning for Spatiotemporal Time Series Forecasting, KDD, 2024\n\n[3] Adaptive Graph Convolutional Recurrent Network for Traffic Forecasting, NeurIPS, 2020" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Have you tried to pre-train separately according to different domains and then fine-tune it for domain-specific downstream tasks? As observed from Table 1, there are several discrepancies in different domains, such as the frequencies of Economics and EEG. Is it possible that separating datasets to pre-train domain-specific models works better?\n2. The proposed method uses a fixed context for pre-training. Padding a large pre-training corpus, which generally contains univariate time series, into a fixed temporal/variate dimension. Will it cause a waste of computing resources?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper researches an important question about generalizable time series understanding across diverse domains.\n2. This work presents a large pre-training corpus, which can be greatly beneficial if the datasets are released.\n3. The method exhibits promising results in handling multivariate time series analysis by leveraging variate and domain signatures." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents OTiS, a deep model pre-trained on a large corpus (11B) for general time series analysis. In this paper, the authors highlight the challenge of heterogeneity when applying self-supervised pre-training on time series. 
An MAE-style pre-training method is adopted to obtain a general tokenizer for multivariate time series, and then different task heads are introduced to complete time series analysis tasks. The model demonstrates strong performance across 15 diverse applications, including time series classification, regression, and forecasting." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. My major concern is about the novelty of the proposed method: The design of the encoder/decoder is nearly identical to MAE. Is there any adaptation for the time series modality? For example, considering the inherent reconstruction difficulties of time series and adjusting the mask ratio compared with the vision modality?\n2. About the model design towards generalizable time series understanding: As the authors mention an important challenge of heterogeneity, I am slightly unconvinced that a shared unified patch embedding/projector can reflect different semantics among variates and domains, even if the patch is completely the same. Prior to this, Moirai adopted different patch sizes for different frequencies; would it further enhance OTiS?\n3. This work adopts learnable embeddings as variate/domain signatures. I am convinced that the signatures can \"distinguish\" them, but how can they explicitly \"capture inter-variate relationships\"? This approach may also limit the generalization scope as the learned signatures do not apply to unseen variates/domains during inference.\n4. About the experiments: Results of classification are not compared with supervised trained deep models, for example, TimesNet and ModernTCN. For the regression task, can you introduce some variate-centric models into this baseline, such as iTransformer? As for forecasting, the average improvement does not seem significant compared with PatchTST. Also, can you provide some explanations about Table 3 as to why OTiS has a significant improvement on some datasets (such as ETTh2) and a great degradation on similar datasets like ETTh1?\n5. A minor suggestion: the name \"dual masking strategy\" can be somewhat overstated to me, as it generally refers to dual or antagonistic behavior (e.g., minimax). I would prefer to simplify the contribution as a \"mixture\" (of masking modeling and generative modeling in this paper), which is a common technique in fact. Also, I would like to know how the ratios (25% - 75% in this paper) of the two strategies are determined.\n6. The pipeline of using the masked pre-trained models seems still somewhat tedious, i.e., lacking in generalization. Supervised training should be performed after large-scale pre-training. Can the authors provide the overall improvement compared with training from random initialization, or try zero-shot generalization on downstream tasks?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024towards,\ntitle={Towards Generalisable Time Series Understanding Across Domains},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=39n570rxyO},\nnote={under review}\n}" }, "abstract": { "value": "In natural language processing and computer vision, self-supervised pre-training on large datasets unlocks foundational model capabilities across domains and tasks.
However, this potential has not yet been realised in time series analysis, where existing methods disregard the heterogeneous nature of time series characteristics. Time series are prevalent in many domains, including medicine, engineering, natural sciences, and finance, but their characteristics vary significantly in terms of variate count, inter-variate relationships, temporal dynamics, and sampling frequency. This inherent heterogeneity across domains prevents effective pre-training on large time series corpora. To address this issue, we introduce OTiS, an open model for general time series analysis, that has been specifically designed to handle multi-domain heterogeneity. We propose a novel pre-training paradigm including a tokeniser with learnable domain-specific signatures, a dual masking strategy to capture temporal causality, and a normalised cross-correlation loss to model long-range dependencies. Our model is pre-trained on a large corpus of 640,187 samples and 11 billion time points spanning 8 distinct domains, enabling it to analyse time series from any (unseen) domain. In comprehensive experiments across 15 diverse applications - including classification, regression, and forecasting - OTiS showcases its ability to accurately capture domain-specific data characteristics and demonstrates its competitiveness against state-of-the-art baselines. Our code and pre-trained weights are publicly available at \\url{https://github.com/OTiS-official/OTiS}." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Time Series Analysis", "Multi-Domain", "Self-Supervised Learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/e4d2cf724e8953bd4834c764d48045f0827da18e.pdf" }, "presentation": null, "primary_area": { "value": "learning on time series and dynamical systems" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/662380416ac92716a8d258d31a6b8c8e293e316d.zip" }, "title": { "value": "Towards Generalisable Time Series Understanding Across Domains" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3A71qNKWAS
LongGenBench: Benchmarking Long-Form Generation in Long Context LLMs
main
Active
Long context LLMs; Long-form generation; Benchmark
datasets and benchmarks
3;5;6;6;6
3;4;5;3;3
2;3;3;3;3
2;2;4;3;2
3;3;2;2;3
5.2
3.6
2.8
2.6
2.6
0.300123
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- question 1: can you present a bit more concise takeaways from your benchmark, to me I feel like I was reading a lot of pieces, and no surprising results to me either. It might be good to have some failure cases to support your point\n\n- question 2: when you evaluate the prompt complexity, how do you choose the prompt formats?\n\n- question 3: Do you think complex prompts and instructions might need manyshots?\n\n- question 4: Do you think you can add some reasoning axes to your benchmark? \n\n- question 5: maybe consider adding some long-context recent models with SOTA results, not only looking at model parameter counts but also architectural differences." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- First benchmark focusing on long-form generation during the test time\n- The evaluation combines both complexities of evaluation prompts and different scenarios\n- First batch of results on 10 mainstreamed LLMs\n- The paper is easy to follow" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces LongGenBench, a benchmark for measuring LLMs' capacities, especially their long-context abilities, by generating long-form context from 16k to 32k with rather complex instructions. This new dataset departs from traditional benchmarks aiming at decoded length in four different scenarios. Preliminary evaluations are done with main streamed LLMs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- I am a little bit distracted from the main takeaways from the experimental studies, and not so convinced with failure cases. See question 1\n\nI have other minor concerns regarding the experiment setup\n\n- There has been much research showing that the prompt format matters, what's your thought? \n\n- Reasoning tasks are not well involved, as o1 seems to argue that longer decoded length is helpful with reasoning complex tasks, in your benchmark, you might want to add an axis of reasoning ability clearly or have some analysis around this topic?\n\n- Most of the evaluation focused on existing transformer-based architecture, but models are presenting SOTA results, for example, mamba-based models. Are those models, with good inference time complexity, good at benchmark, or if not, why?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* How do you split the long generation and match them to all the subtask instructions?\n* How do you check if every sub-instruction is satisfied? is it by prompting another LLM or by word matching, etc.?\n* What's the significant difference between STIC-1 and STIC-2? Looks they just have difference denominators. I don't quite get it although there is a paragraph in Sec. 2.4 as below for this. Is there any specific case where an LLM can get low STIC-1 while high STIC-2, or the other way around?\n\n> STIC-1 is primarily concerned with the completion rate of instructions that result in sub-scenarios,\nfocusing on whether instructions are correctly executed. In contrast, STIC-2 assesses the overall completion of the specific instruction task, including the presence of sub-scenarios and their completion\nstatus." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* This paper is overall clear and easy to understand.\n* This proposed evaluation is novel, and the generation ability it benchmarks is not covered by previous metrics." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Existing LLMs' long-context benchmarks rely on understanding tasks. This paper proposes a benchmark targeting specifically on long-context *generation* ability of LLMs. The authors design 4 long-context instruction-following tasks, up to 16 or 32K tokens: (1) Writing Diary for a year; (2) Wrting menu for a year; (3) Design a 100/300-floor skyscraper; (4) Plan an urban layout. As a result, all existing LLMs don't work well on these tasks of the benchmark." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* Some of the details are possibly missing or hard to get by readers -- see \"Questions\".\n* In the proposed benchmark, the way to form long content is to pile short answers to many sub-queries, while the sub-tasks are actually independent, to a large extent. For example, given all the demands on one-year dairies, it should be easy for LLMs to write a diary if it is assigned a specific day of the year, while this benchmark just require the LLM generate 365 diaries all at once. In this case, the challenge of this benchmark might majorly be forgetting the instruction under the influence of generated content, instead of keeping the conherence and content interaction among generated long content. That latter should be the one mostly desired by the community." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. 
It seems that the evaluation metrics are designed for these scenarios. If there are new scenario tasks, do we need to update the evaluation metrics?\n2. What other aspects of long text generation with LLM do you think need to be evaluated? It seems that your evaluation is more oriented towards some planning tasks or well-structured text. Is it difficult to evaluate the creation task? For example, it is difficult to design evaluation metrics for novel writing?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Interesting task design, which can evaluate the long text generation ability of large models from a certain perspective\n2. The paper is well written." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces LongGenBench, a benchmark for evaluating large language models' (LLMs) ability to generate long-form text while adhering to complex instructions. It features tasks in four scenarios with different instruction types and lengths (16K and 32K tokens). The study finds that even advanced LLMs struggle with long-form generation, particularly as text length increases, highlighting the need for improvements in model architecture and training." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The types of task scenarios are relatively limited, and it is impossible to comprehensively evaluate the long text generation capabilities of large models.\n2. The evaluation metrics seem to be customized according to the scenario.\n3. Limited number of models evaluated" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Would an IFEval [1] style setting where the outputs or attributes of the outputs could be verified definitively using code checks be a better option for such a benchmark? For long generations, getting the model outputs to match specific output patterns in the prompt is a challenge unto itself.\n\n[1] https://arxiv.org/abs/2311.07911" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper is the first attempt at long-context generation, requiring models to generate a long text that follows a combination of specific instructions as opposed to just answering questions pertaining to long prompts.\n2. The benchmark does uncover a setting that seems to be challenging to SOTA models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a benchmark to evaluate models' strength in long-span generation tasks. 
It constructs synthetic examples from a fixed set of scenarios and templates in three modes. The resultant instruction measures models' abilities to faithfully follow position-specific, range-specific, and periodic instructions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper is very sparse on details regarding the evaluation of correctness. How is matching and parsing done w.r.t. the templates? A model could generate several outputs matching the criteria\n2. The benchmark is limited to a few domains and scenarios, and the paper's contributions seem quite limited overall" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* In L460-468, it is mentioned that there are significant repetitions in long-form generations. Are these repetitions correct with respect to the given instructions, or are they semantically wrong (i.e. violating the given instructions)? The former indicates that it's probably caused by how instructions in the prompt are designed, while the latter means that model struggles at producing long and meaningful generation." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The paper is the first to study long-form generation as opposed to long-context generation. The perspective is novel, interesting, and of practical value to unleash the potential of LLMs for more complicated tasks.\n* Problems in the benchmark are constructed in a sound and intuitive way. While evaluating the quality of long text snippets is usually challenging and complex, the smart design in this paper enables reliable and accurate evaluation for long-form capability." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a new benchmark for long-form generation where the model-generated content, rather than the input context, is long. Specifically, the model is asked to roll out a detailed description of certain tasks such as diary over a year or floor plan for a skyscraper, subject to a few specific instructions that can be either singular or recurrent. Evaluation metrics include main task completion to test whether generation follows the expected format, as well as the success rate for specific instructions at both micro and macro level. Results demonstrate that the proposed benchmark is much more challenging than needle-in-the-haystack tasks for long context evaluation. Models generally do not perform very well within their claimed context lengths." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "One of my major concerns is the clarity of writing in the evaluation part.\n* The definition of STIC-1/STIC-2 isn't quite clear. 
Using the notation $T_S=(T_{S_1}, T_{S_2}, \\dots)$ in Sec 2.3, my best guess is that STIC-1 means the average success rate over $(T_{S_1}, T_{S_2}, \\dots)$, while STIC-2 counts the entire $T_S$ as successful only if all $(T_{S_1}, T_{S_2}, \\dots)$ are successful, and gives 0 score otherwise.\n* The abbreviation **CR** in Table3 isn't defined anywhere, though I can guess this is likely the Completion Rate in Main Task Completion.\n* The terms \"main task\", \"subtask\", \"instruction task\", \"specific task\" are used in a confusing way. It would be very helpful to unify them with clear definitions, and to associate the terms to symbols like $T_S$ or $T_{S_1}$.\n* Missing x-ticks in Figure 2.\n\nApart that, there are a few technical details that are unclear to me.\n* How do you determine whether the generated text at one specific point satisfies all task requirements? For example, given the generated diary of a particular day, how do you verify that all required activities (e.g. wedding, vacation, and etc.) are covered? I would imagine that a natural language inference (NLI) module should be involved but it's not mentioned in the paper.\n* In Table 3, though the length to be evaluated at is 16K/32K respectively, the actual generation length seems only around half of the max length. How are the 16K/32K defined?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024longgenbench,\ntitle={LongGenBench: Benchmarking Long-Form Generation in Long Context {LLM}s},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3A71qNKWAS},\nnote={under review}\n}" }, "abstract": { "value": "Current benchmarks like ``$\\textit{Needle-in-a-Haystack}$'' ($\\textit{NIAH}$), $\\textit{Ruler}$, and $\\textit{Needlebench}$ focus on models' ability to understand long-context input sequences but fail to capture a critical dimension: the generation of high-quality long-form text. Applications such as design proposals, technical documentation, and creative writing rely on coherent, instruction-following outputs over extended sequences—a challenge that existing benchmarks do not adequately address. To fill this gap, we introduce $\\textit{LongGenBench}$, a novel benchmark designed to rigorously evaluate large language models' (LLMs) ability to generate long text while adhering to complex instructions. Through tasks requiring specific events or constraints within generated text, $\\textit{LongGenBench}$ evaluates model performance across four distinct scenarios, three instruction types, and two generation-lengths (16K and 32K tokens). Our evaluation of ten state-of-the-art LLMs reveals that, despite strong results on $\\textit{Ruler}$, all models struggled with long text generation on $\\textit{LongGenBench}$, particularly as text length increased. This suggests that current LLMs are not yet equipped to meet the demands of real-world, long-form text generation. We open-source $\\textit{LongGenBench}$ to promote comprehensive evaluation and improvement in this critical area, with code and data available at ${anonymousurl}$." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Long context LLMs; Long-form generation; Benchmark" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/a865cd333adc3d3d651b4b45f1c5eb76abc16791.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/fa5e8cc402fd05efef96deea52e04b175017dc0e.zip" }, "title": { "value": "LongGenBench: Benchmarking Long-Form Generation in Long Context LLMs" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3AAXabeZPG
Debiased Medical Report Generation with High-Frequency Amplification
main
Active
Medical Report Generation;Debiased Generation;Visual Bias;Textual Bias;Frequency Bias;Fourier Transform;High-pass Filtering
applications to computer vision, audio, language, and other modalities
3;3;6;6
3;4;3;3
2;3;2;3
2;2;3;2
2;2;3;3
4.5
3.25
2.5
2.25
2.5
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. The structure of the whole model of proposed approach is not clearly mentioned, like how many HAL layers were inserted. Providing a global view of the model pipeline will make it clearer and easier to follow. \n2. In line 422, HAL is placed after the cross-attention layer. If HAL is after this layer, how does it influence the already computed cross-attention?\n3. In line 201 and Figure 3a, \"classification accuracy improves as the number of diseases increases.\" How should this conclusion be interpreted, given that a higher number of diseases might exacerbate distribution imbalance?\n4. How is the hyperparameter alpha for the high-pass filter set to 8? From Figure 4, performance appears to still improve as alpha increases.\n5. How was the decision made to train for 39 epochs, while the generalization assessment plots the training/validation curve for 20 epochs?\n6. The baseline without the HAL layer is not reported, which could illustrate the influence of the HAL layer on the model." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- This paper provides novel insights into visual and textual biases important for report generation and attempts to reduce them.\n- It offers a thorough examination of these biases and their impact on model performance.\n- The clear problem definition contributes to a more focused discussion in the MRG field." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the issue of low frequency (global) visual and textual biases in report generation caused by visual imbalance, textual imbalance, and skewed distribution of disease labels. The authors propose a novel approach called the High-Frequency Amplification Layer (HAL), which uses DFFT on the time axis and feature axis, followed by masking to perform high-pass filtering. This method emphasizes high-frequency components in the feature and may reduce biases. The paper includes experiments on the MIMIC-CXR and IV X-ray datasets to demonstrate improved performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Some experimental settings are unclear; further explanation would improve clarity.\n- Although the implementation of the HAL layer (DFFT on the time and feature axes) is simple, this layer requires further comprehensive evaluation and ablation studies." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "My questions are listed above in the “weaknesses” section." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper introduces an approach for improving the quality of medical report generation, which is a high-impact problem with potential for positively impacting the field of medicine.\n2. The authors demonstrate that their proposed approach HAL leads to performance improvements over several existing methods in this domain." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors conduct an examination of visual and textual biases in medical report generation (MRG) datasets. The analysis find that global patterns, such as normal regions and findings, contribute to visual and textual biases. These biases make MRG models prone to frequency bias, where global patterns are prioritized and local patterns (e.g. abnormal findings) are ignored. In order to mitigate this issue, the authors propose an architectural modification in the form of a high-frequency amplification layer (HAL), which aims to enhance a model’s perceptiveness to fine-grained details. HAL reduces biases, leading to improved performance in MRG tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Inadequate evaluations for demonstrating the utility of HAL**: The key claim of this paper is that the proposed method HAL improves robustness of medical report generation models, which may struggle to learn fine-grained abnormal findings. However, this claim is not sufficiently evaluated in Section 7, and as a result, it is unclear if HAL is improving robustness to these biases. Only aggregate performance values are reported on the MIMIC-CXR and IU datasets.\n\n a. Does HAL improve report generation performance (i.e. NLG and CE scores) when there is a single abnormality in the image? What about multiple abnormalities? Does HAL improve report generation performance when findings are small in size? Does HAL reduce performance on normal cases? All of these questions are critical for determining whether HAL mitigates biases as claimed, but none of these are evaluated. \n\n b. Additionally, in order to demonstrate the usefulness of HAL, Tables 1 and 2 could benefit from an additional ablation using the exact same experimental setup but without the novel HAL layer. \n\n c. In Tables 1 and 2, I recommend that the authors use more recently-developed (standard) report generation metrics for evaluating report quality with respect to factuality, such as RadGraph-F1 [1] or RadCliQ [2].\n\n2. 
**Inadequate evaluations for demonstrating the existence of visual and textual bias in report generation datasets:** The evaluations in Section 4.1 show that a classifier $f_{Z|X}$ trained on the images demonstrates lower performance when abnormalities occupy small regions. Similarly, a classifier $f_{Z|\\hat{Y}}$ trained on generated reports demonstrates lower performance when there are more normal samples in the training data. These results show that the classifier $f$ picks up on several biases, but how do these experiments relate to the report generation task that is the focus of this work? Do report generation models learn these same biases? It is unclear to me why classification models are the focus of this analysis. \n\n3. **Presentation issues:** There are several presentation issues in this manuscript.\n\n a. First, the notation provided in Section 3.2 is overly convoluted and unclear; for instance, how can the value of an image or text report be set to 0 or 1 (Lines 136-138)? What is meant by positive and negative in this context? This notation also seems unnecessary, since most of this notation is never referenced again in the manuscript. \n\n b. Additionally, section 4.2 is critical to this paper yet does not include adequate implementation details in the main text to understand the goals of the experiments, with most of this material being relegated to the appendix instead. For instance, details on the classification task, classification model, dataset, etc. are not provided in the main text, making it difficult to understand the problem setup. \n\n[1] Delbrouck et al. \"Improving the Factual Correctness of Radiology Report Generation with Semantic Rewards”.” 2022.\n\n[2] Yu et al. “Evaluating progress in automatic chest X-ray radiology report generation.” 2023." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I have listed some questions in the “Weaknesses” section. Below are a few more.\n\nLine 267:\n\nWhy is $T$ the first dimension of $A$? I think it should be $N$ because the $U$ is $N \\times |d|$.\n\nLine 412 and figure 5:\n\nIt is unclear what “neurons” refer to here. Is it the output of the attention layer or MLP layer, or something else?\n\nFigure 4: \nHow is accuracy calculated here?\nSince the loss keeps decreasing for larger $\\alpha$, have the authors considered increasing the $\\alpha$ beyond 8?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The work presents a novel perspective on medical report generation, identifying bias towards low-frequency regions as a challenge for learning good visual representations. The authors introduce the problem with clarity, providing evidence for the correlation between signal frequency and performance. 
Using Fourier transforms to filter out low-frequency regions from an image is an interesting solution to the problem. The efficacy of the method is backed by empirical results: the model trains better and achieves comparable performance with the SoTA. Overall, there seems to be potential both for mitigating visual and textual bias as an area of research and this specific method for doing so." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper identifies transformer models’ bias towards low-frequency regions of an image as a potential source for their low performance on the medical report generation (MRG) task. It provides evidence for visual and textual bias, where larger abnormal regions and the number of diseases in a study leads to a higher F1 score of the generated report. Since most of an image is normal, models are biased towards classifying images as normal. The paper addresses this issue by proposing a high-frequency amplification layer (HAL) in order to filter out low-frequency regions. It demonstrates that models trained with HAL learn more discriminative representations of diseases, among other benefits, which leads to comparable performance on natural language generation (NLG) and clinical efficacy (CE) metrics to the state-of-the-art (SoTA)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Table 1:\n\nAlthough HAL achieves performance that puts it in the top 3, ultimately its F1 is still more than 8 points lower than the SoTA, which brings into question its advantage over PromptMRG. It would be interesting if the performance gain from HAL composes with gains from other methods. For instance, would RGRG + HAL result in better performance than just RGRG alone? Therefore, the authors should include a comparison with a simple transformer that does not use HAL in order to quantify the effect of HAL on model performance.\n\nTable 2:\n\nAs the authors themselves noted, this comparison is unfair because the baseline models were evaluated zero-shot on IU-Xray while HAL was trained. The authors should provide a fairer comparison, perhaps by also evaluate zero-shot a model with HAL trained on MIMIC-CXR but not IU -Xray.\n\nLine 130-131:\n\n> Each medical image is paired with a corresponding medical report… indicates the size of the vocabulary.\n\nThe notation is weird here. Why does $Y = [y_1, \\cdots, y_t, \\cdots, y_T]$ belong in $\\{0, 1\\}^{|v|}$? What does this set, $\\{0, 1\\}^{|v|}$ refer to?\n\nLine 133-138:\n\nI think it is unnecessary to use math notation here to talk about positive and negative samples. It does not add clarity to the explanation. For example, the notation $|X^{(z)}|$ does not give the reader any more information about how the size of an abnormal region is calculated.\n\nFigure 3a (left) is very hard to read. I cannot figure out which bar has the score of 0.50. Furthermore, although the discussion of textual bias is interesting, it is left unaddressed by the paper as it focuses on visual bias." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. There should be more explanations about why clinical efficacy is lower than the SOTA model. The paper aims at utilizing HAL to capture more abnormal regions, but the CE metric for detecting abnormalities was not been improved. \n2. Can HAL be applied in other existing MRG models? More ablation studies of HAL should be conducted to demonstrate its effectiveness." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper defines visual and textual biases, highlighting their impact on MRG model performance.\n2. The empirical analysis of visual and textual biases confirms the presence of each bias and demonstrates the existence of the frequency bias.\n3. The work introduces a high-frequency amplification layer to amplify high-frequency signals, enabling improved detection of abnormal features.\n4. The paper provides a thorough experimental analysis, including ablation studies and qualitative comparisons, to substantiate the effectiveness of the proposed methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper identifies the challenge of visual and textual biases in automated medical report generation, which stems from the overwhelming presence of normal features in both medical images and reports. The authors define visual bias and textual bias, associating these biases with *frequency bias*, where models tend to emphasize low-frequency (normal) signals over high-frequency (abnormal) signals. To counter this, they propose the High-Frequency Amplification Layer (HAL), designed to heighten the model’s sensitivity to abnormal (high-frequency) details, thus enhancing diagnostic accuracy. Validation on MIMIC-CXR and IU X-ray benchmarks shows HAL’s effectiveness through various analyses and demonstrates competitive or superior performance compared to state-of-the-art models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Many of the experiments are conducted to demonstrate the presence of visual and textual biases, but the experimental details are not clearly articulated. For example, how many samples were utilized to analyze the visual bias, and what is the text classifier? Moreover, most of the figures for analysis should have more explanations (e.g., Figure 3, Figure 9 and Figure 10). It is not clear right now.\n2. It is not clear where HAL applied. Were they adopted in all cross-attention layers? \n3. The paper lacks sufficient novelty, as it only combines HAL. However, it does not adequately explain the results of the baseline model. Exactly how HAL works in the final report generation should be further enhanced with examples of generated reports." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024debiased,\ntitle={Debiased Medical Report Generation with High-Frequency Amplification},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3AAXabeZPG},\nnote={under review}\n}" }, "abstract": { "value": "In recent years, automated medical report generation (MRG) has gained significant research value for its potential to reduce workload and prevent diagnostic errors. However, generating accurate radiology reports remains challenging due to the prevalence of normal regions in X-ray images and normal descriptions in medical reports. Despite various efforts to address these issues, the definitions of visual bias and textual bias remain unclear and there is still a lack of comprehensive analysis of how these biases affect model behavior. \nIn this work, we rigorously define and conduct an in-depth examination of visual and textual biases inherent in MRG dataset. Our analysis emphasizes that global patterns, such as normal regions and findings, contribute to visual and textual bias. Further, we discuss how these biases make MRG models especially prone to frequency bias, where models tend to prioritize low-frequency signals that capture global patterns, while neglecting high-frequency signals. To debiase the frequency bias, we propose the high-frequency amplification layer (HAL), aimed at enhancing the model's perceptiveness to fine-grained details. Our extensive experiments show that by amplifying high-frequency signals, HAL reduces both visual and textual biases, leading to improved performance in MRG tasks." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Medical Report Generation", "Debiased Generation", "Visual Bias", "Textual Bias", "Frequency Bias", "Fourier Transform", "High-pass Filtering" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/65983378dedc4702c4dd55a45bde72867c2e720a.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Debiased Medical Report Generation with High-Frequency Amplification" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3ANoEa7roV
Systematic Assessment of Tabular Data Synthesis
main
Active
Tabular Data Synthesis;Privacy;Evaluation Metric;Generative Models
alignment, fairness, safety, privacy, and societal considerations
3;3;5;5;6;6;8
4;3;3;4;3;4;4
2;2;2;2;3;3;3
2;2;2;2;3;3;3
3;2;3;4;3;2;3
5.142857
3.571429
2.428571
2.428571
2.857143
0.251259
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "How can we evaluate whether the synthesized tabular data has general applicability for downstream tasks?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "+. The authors introduced a new fidelity metric based on Wasserstein distance to evaluate diverse data types, addressing the heterogeneity and high dimensionality of tabular data.\n+. The authors introduced the membership disclosure score as a novel privacy metric effectively addresses the limitations of existing privacy metrics and enhances the understanding of privacy risks in data synthesis." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper reviews the current state of tabular data synthesis, an approach that balances data utility with privacy. Despite numerous proposed algorithms, a comprehensive comparison of their performance is lacking due to the absence of standardized evaluation metrics. The authors critique existing metrics and propose new ones focusing on fidelity, privacy, and utility. They also introduce a unified tuning objective that enhances the quality of synthetic data across different methods. Extensive evaluations on eight synthesizers and twelve datasets reveal insights that guide future research on privacy-preserving data synthesis." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "-. The related work section of the paper is relatively weak and should be systematically organized to provide a more comprehensive introduction of relevant studies.\n-. My major concern is whether the synthesized tabular data can maintain usability for more complex downstream applications. In other words, is this usability specific to certain downstream tasks, or can it apply to any downstream task?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "W1-W4" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "S1. 
Extensive experiments are conducted to examine the performance of each tabular data synthesizer w.r.t. the three metrics adopted in this paper." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates the performance of tabular data synthesizers w.r.t. three metrics, fidelity, privacy, and utility. The authors conduct extensive experiments to compare SOTA tabular data synthesizers." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "W1. The technical depth of this paper is limited. All the metrics are already proposed by existing work and the authors mainly conduct an empirical comparison among tabular data synthesizers. The novelty of this paper is not clear.\n\nW2. This paper discusses tabular data synthesizers. However, it seems to me that the three metrics also fit general data synthesizers. What is the unique feature of tabular data that requires the adoption of the three metrics in model evaluation? If the authors cannot elaborate on the connection between the three metrics and tabular data, the motivation for adopting the three metrics will be unclear.\n\nW3. Since the authors adopt three metrics, the tabular data synthesis task becomes a multi-objective optimization problem. What are the relationships among the three objectives? Do they contradict each other? Is it possible to maximize the model performance w.r.t. all three metrics? The authors should provide an in-depth analysis of these issues.\n\nW4. For fidelity, why only consider the marginal distribution (definition 1)? If we only consider marginal distributions, the complex relationships among columns in a table may be ignored. Note that for tabular data, we may have some (approximate) functional dependencies among columns, which are very important for data integrity and challenging to capture for tabular data synthesizers." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "**MDS**\n\n1) Def 2 isn't well defined:\n- H is supposedly sampled \"at random\" from the dataset, but the distribution of this sampling isn't defined.\n- please replace the expectation's subscript with $H \\subset D \\setminus \\{x\\}$, or clarify that the expression $x\\in H$ is an additional requirement to $H \\subset D$.\n- what is $\\mathcal{M}$ in this definition? Can it be _any_ distance? Any particular property it should have?\n\n2) I have several reasons to think that the membership disclosure score (MDS), as defined in Eq. 8, is a very poor choice:\n- It's important to note that MDS, as defined in Eq. 8, is just an estimate, and it gives no formal guarantees. For a privacy metric, this is troublesome, and I highlight one counterexample to its reliability below.\n- Here's an example where MDS suggests high privacy, but where an attack is trivial. Consider a synthesizer $s(\\cdot)$ that maps a point as follows $s: x \\mapsto -x$ . 
It's trivial to see how an attacker can achieve 100% accuracy. Yet, suppose the nearest neighbor to $x$ in the real dataset is $x+\\varepsilon$. Then MDS would be proportional to $|d(x, s(x)) - d(x, s(x+\\varepsilon))|$, which can be made arbitrarily small with $\\varepsilon$. Hence MDS is a metric which can be tricked, and this makes it unsuitable for any serious privacy application. NOTE: I noted that in Appendix C you acknowledge possible drawbacks, but seem to dismiss them. Unfortunately, these are _critical_ issues even for less contrived synthesizers: for example, privacy metrics are routinely used to ensure that synthesizer implementations are bug-free, and this is certainly something that MDS cannot be trusted to do. \n- MDS' value is (potentially) unbounded, and it's unclear how its value can be matched to the risk of successful attack. Note that the two main ways of measuring privacy in this context both offer this: 1) DP (its parameters can be mathematically matched to the risk of MIA) and of course 2) running a (potentially worst-case) MIA attack directly tells us this.\n- Finally, an important drawback is that MDS doesn't really capture the worst-case: it takes the average (expectation) across multiple runs of the generator. This may be fine, but it should be carefully motivated.\n\nI strongly recommend using a conventional metric (e.g., risk against state of the art MIA attack), which is empirical (similarly to MDS), but provides a better interpretation and it is well-understood by the security community.\nTogether with this MIA metric, I recommend also including a metric with theoretical guarantees; DP parameters $(\\varepsilon, \\delta)$ would be the most standard choice for this.\n\n**Utility**\n\nThe authors introduce \"Machine Learning Affinity\" (MLA) as a metric, which is defined as the average difference across various models of the performance of a model that uses training or synthetic data. This feels incremental: most works on synthetic data generation already look at the difference between the performance (e.g., (Jordon et al., 2021)), and looking at the average across models looks like the natural next-step. I would recommend downtuning the claim that this metric is novel." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Well-written, and evaluates many methods against many datasets\n- Wasserstein distance seems like an appropriate choice for measuring fidelity, and it is adequately justified; similarly, MLA and QueryError seem good measures for utility.\n- The paper offers some interesting takeaways: statistical methods work best for privacy applications, while diffusion models offer good fidelity." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper offers an opinionated selection of methods to evaluate synthetic data generation methods for tabular data. In particular, they select metrics to evaluate fidelity (Wasserstein distance), empirical privacy (they introduce the Membership Disclosure Score), and utility (via Machine Learning Affinity and QueryError). The paper goes through a very extensive set of experiments, where it compares synthesizers based on these metrics." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The proposed privacy metric, MDS, has critical flaws that make it unsuitable (and, potentially, misleading) for evaluating privacy. [See below]\n- The use of Wasserstein distance for synthesizer isn't exactly new; for example, Singh et al. [1] used it in their optimization objective. Similarly, as argued below, MLA is incremental.\n- Besides evaluating synthesizers with an opinionated (and well-justified, in some cases) approach, this paper feels quite redundant and it's unclear what is the \"delta\" from prior work. From a quick search, there's dozens of synthetic data evaluation frameworks [2-6], and it's unclear why a new one is needed. I appreciated your comparisons between metrics, provided in the appendix, but the main question is: can you empirically demonstrate that one would be wrong in using one of the prior frameworks, and that they should use yours instead?\n\n[1] Singh Walia, Manhar. \"Synthetic Data Generation Using Wasserstein Conditional Gans With Gradient Penalty (WCGANS-GP).\" (2020).\n\n[2] https://github.com/schneiderkamplab/syntheval\n\n[3] https://github.com/Vicomtech/STDG-evaluation-metrics?tab=readme-ov-file\n\n[4] Qian, Zhaozhi, Rob Davis, and Mihaela van der Schaar. \"Synthcity: a benchmark framework for diverse use cases of tabular synthetic data.\" _Advances in Neural Information Processing Systems_ 36 (2024).\n\n[5] Livieris, Ioannis E., et al. \"An evaluation framework for synthetic data generation models.\" _IFIP International Conference on Artificial Intelligence Applications and Innovations_. Cham: Springer Nature Switzerland, 2024.\n\n[6] McLachlan, Scott, et al. \"Realistic synthetic data generation: The ATEN framework.\" _Biomedical Engineering Systems and Technologies: 11th International Joint Conference, BIOSTEC 2018, Funchal, Madeira, Portugal, January 19–21, 2018, Revised Selected Papers 11_. Springer International Publishing, 2019." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1) The paper is very thorough. Given the extensive appendix, I imagine it has gone through one or more review cycles before. Nevertheless, the authors clearly have discussed in details a lot of very reasonable concerns about their approach, which I quite appreciate. \n2) The paper is quite topical and there aren't that many similar papers out there." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors present an evaluation framework called SynMeter to evaluate generative modeling approaches for tabular data across 3 different dimensions: i) fidelity, ii) utility, and iii) privacy. 
The authors introduce reasonable metrics to evaluate algorithms/datasets along these 3 dimensions. The authors then evaluate several SOTA tabular data generation algorithms using SynMeter." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) Presentation: \na) The authors try to cram in too many things into the paper. All figures are too tiny. I recommend moving some of the figures to the appendix if make the remaining ones bigger. \nb) The appendix needs better structure. I'd recommend moving the experiment details ahead of discussion of limitations. \n2) Technical points: \na) The MDS metric is designed for the synthesis algorithm whereas others are for a specific synthetic dataset. This is a major inconsistency. \nb) MDS only captures privacy against MIA attacks. This is not unreasonable but please spend a few lines explaining why you chose to only focus on MIAs?\nc) One of the references [1] is quite similar to this paper in scope but uses slightly different metrics. Given the similarity, I would love to see the authors discuss the key differences between the papers and areas of novelty. \n\n\n\n[1] Qian, Zhaozhi, Rob Davis, and Mihaela van der Schaar. \"Synthcity: a benchmark framework for diverse use cases of tabular synthetic data.\" Advances in Neural Information Processing Systems 36 (2024)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Since the tuning objective includes the same metrics used for evaluation, isn’t the observed performance improvement in Table 1 simply an expected result? Can this performance improvement truly be considered a reflection of the tuning objective’s effectiveness?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper effectively identifies and addresses the limitations of existing metrics for fidelity, privacy, and utility, highlighting the need for the proposed metrics.\n2. By introducing fidelity, privacy, and utility as core evaluation dimensions, the paper offers a well-rounded framework for assessing synthetic data quality.\n3. The proposed metrics are thoroughly validated through extensive experiments on a large number of datasets, demonstrating their robustness and applicability in various contexts." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper critically examines the limitations of existing evaluation metrics and introduces new metrics for fidelity, privacy, and utility, establishing a comprehensive framework for assessing tabular data synthesis. Additionally, it proposes an integrated tuning objective that consistently optimizes data quality across different synthesizers. 
The study demonstrates that recent advancements in generative models significantly enhance tabular data synthesis performance while also highlighting key challenges, such as privacy risks and performance disparities among synthesizers." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. There is a lack of experimental comparison between the proposed fidelity and utility metrics and existing metrics. For example, including case studies where the proposed metrics and existing ones yield different evaluations on the same model could have strengthened the paper's claim of improved fidelity and utility assessments. \n2. While the proposed privacy metric is an innovative approach, its reliance on numerous shadow models and synthetic datasets could lead to high computational costs. This complexity might render the metric impractical for large datasets, as the evaluation process could require substantial time and resources, limiting its usability in real-world applications." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "(also see weaknesses)\n\n- Could you come up with an experimental setup and results which would compellingly show why the Wasserstein-based fidelity metric is strictly better than other deployed methods?\n- Why are MIAs against tabular data synthesis not well understood? \n- There indeed does not exist one MIA effective across all synthesizers, but this does not seem like a justification why MIAs are not useful? The ineffectiveness of the MIA might also just reflect little privacy leakage? \n- How does the MDS metric resolve your previously raised concerns regarding MIAs? To my understanding, you are in fact proposing a new MIA, but not evaluating it as such. \n- Could you implement the MIA developed by Houssiau et al, and explain why the MDS metric is superior to computing the MIA performance for records identified by Meeus et al? \n- Could the authors clarify why the query error should be part of the utility and not part of the fidelity evaluation?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Comprehensively evaluating synthetic data generators is an important problem, and the paper provides a systematic, multi-dimensional evaluation framework to do so. \n2. The paper considers many datasets and synthetic data generators, and comparing them across metrics is valuable for the research domain as a whole.\n3. Proposes a way to pick hyperparameters across multiple dimensions. \n4. 
Authors make the framework publicly available as a tool for people generating synthetic data" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a new framework, called SynMeter, to assess (tabular) synthetic data generators. They focus on three dimensions: \n\n- Fidelity: \n\t- Authors argue the need for a faithful and universal metric\n\t- They propose a Wasserstein distance-based metric to evaluate complex, high-dimensional tabular data distributions\n- Privacy\n\t- Authors argue that syntactic privacy scores are not adequate\n\t- Authors argue that existing MIAs are ineffective, as they are not well understood and no MIA is effective against all synthesizers. \n\t- They propose a new metric called membership disclosure. \n- Utility\n\t- Authors state that the traditionally used ML efficacy is not adequate, as they argue that there is no consensus on which evaluator should be used. \n\t\t- They propose two new metrics: ML affinity and query error. \n\nThe paper then includes a holistic tuning objective as a combination of all metrics, to be used for hyperparameter selection. \n\nFinally, the paper includes comprehensive experiments evaluating all metrics across datasets and generators." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While I understand the need for a holistic and widely agreed upon evaluation framework for synthetic data generators, as a reader, I am not convinced that the metrics proposed by the authors are novel, or particularly better than previously proposed ones. I elaborate on each of the dimensions: \n\n**1. Fidelity.**\n\nWhile I find the notion of using Wasserstein distance to compute fidelity interesting, I am not yet convinced that it would be better than existing methods. \n- Could you come up with an experimental setup and results which would compellingly show why the Wasserstein-based fidelity metric is strictly better than other deployed methods?\n\n**2. Privacy.**\n\nI agree with the authors on the shortcomings of syntactic metrics, and like the example given for DCR. However:\n- I do not follow the arguments made for why MIAs are not sufficient. \n\t- Why are MIAs against tabular data synthesis not well understood? \n\t- There indeed does not exist one MIA effective across all synthesizers, but this does not seem like a justification why MIAs are not useful? The ineffectiveness of the MIA might also just reflect limited privacy leakage? \n- I do not understand what the difference is between the MDS metric and an MIA. If I understand it correctly, you are building a shadow model setup to then compute an MIA scoring function (which you then do not evaluate as an MIA). You then pick the record for which you get the best distinction for this scoring function. To me this basically comes down to computing MIA performance for all records and using the highest MIA performance as the privacy metric. \n\t- How does this resolve your previously raised concerns regarding MIAs? \n\t- Moreover, with this, it is not clear whether this is the state-of-the-art MIA. \n- Finally, in this entire discussion, I believe the authors fail to mention (and implement) important related work. Houssiau et al [1] propose a new MIA which beats the one proposed by Stadler et al, and Meeus et al [2] propose a principled way to identify most at-risk records. \n\n**3. 
Utility.**\n\nI agree with the authors that there is no consensus in the literature on which metric should be used to evaluate the utility of the synthetic data. My thoughts:\n\t- While the exact formulation of the MLA score is, at least to my knowledge, new, I believe its novelty to be very limited. For instance, Stadler et al (in Sec. 6.3) measure utility as a decrease in ML accuracy of a model trained on real compared to a model trained on synthetic data. The only difference with the MLA metric would be the averaging across ML models and the normalization. \n\t- Similarly, the query error seems very similar to the k-way marginals fidelity approach, which has also been studied in for instance Annamalai et al. [3] \n\t- Could authors clarify why the query error should be part of the utility and not part of the fidelity evaluation? \n\n**References**\n\n[1] Houssiau, F., Jordon, J., Cohen, S. N., Daniel, O., Elliott, A., Geddes, J., ... & Szpruch, L. (2022). Tapas: a toolbox for adversarial privacy auditing of synthetic data. arXiv preprint arXiv:2211.06550.\n\n[2] Meeus, M., Guepin, F., Creţu, A. M., & de Montjoye, Y. A. (2023, September). Achilles’ heels: vulnerable record identification in synthetic data publishing. In European Symposium on Research in Computer Security (pp. 380-399). Cham: Springer Nature Switzerland.\n\n[3] Annamalai, M. S. M. S., Gadotti, A., & Rocher, L. (2024). A linear reconstruction approach for attribute inference attacks against synthetic data." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1) How does your work relate to existing surveys on tabular synthetic data?\n\n2) Why is Wasserstein distance an appropriate metric for categorical fields? \n\n3) How do tabular transformers like RealTabFormer compare to the baselines you have evaluated?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This paper is interesting, I think these kinds of benchmarking studies can be very useful. \n\nThe paper tackles an important problem, there is a lot of interest in generating synthetic tabular data right now. I thought some of the metrics were interesting, particularly query error, which seems to address a common use case for synthetic data. I appreciated that the MDS score was taken as a maximum over x in D, in line with best practices in the membership inference literature. \n\nThe head-to-head comparison between models is very practically useful, and could have real-world impact." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies the problem of evaluating tabular synthetic data generation techniques. To this end, it critiques existing metrics for fidelity, privacy, and utility, and proposes new ones. Then, it evaluates these metrics on a set of 12 datasets. 
The authors find that diffusion models generally outperform other model types in terms of utility and fidelity, whereas statistical methods are better suited for privacy. They also evaluate differentially-private synthesizers and evaluate the effect of DP budget." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I’m a little worried that the level of the technical contribution and the findings may not rise to the level of a top-tier conference. This is particularly true given that there exist other surveys on the quality of synthetic data generators. \n\n1)\tI don’t understand why the Wasserstein metric makes sense as a fidelity metric for categorical variables. You have defined the cost matrix as infinite between classes. So how can you find a meaningful transport map? This doesn’t seem like the right metric for categorical variables. By the way, Wasserstein distance has already been used as a fidelity metric for synthetic data over a metric space (e.g., CTAB-GAN+ (Zhao et al), DoppelGANger (Lin et al)), with total variation distance being used for categorial variables. \n\n2)\tThe paper seems not to mention a number of related works, including:\n\n-\tOn the Inadequacy of Similarity-based Privacy Metrics: Reconstruction Attacks against Truly Anonymous Synthetic Data (Ganev and De Cristofaro)\n-\tOn the Quality of Synthetic Generated Tabular Data (Espinosa and Figueira)\n-\tA universal metric for the evaluation of synthetic tabular data (Chundawat et al)\n\nFor a survey paper, these omissions worried me--I’m concerned that there may be others I’m not aware of. I would definitely want to see a more in-depth literature review. How does your work relate to these, particularly the surveys on synthetic tabular data? What does your survey add to the discussion? \n\n3)\tThe evaluation seems incomplete in terms of baselines. I thought there should at least be a tabular transformer (e.g., RealTabFormer or a successor) in the mix. GReaT is a transformer, but the tokenization is not really tailored to tabular data, and RealTabFormer is (a) more widely used in practice, and (b) not compared against in the GReaT paper. Also, the tables in the evaluation are illegible—it would be nice to make them bigger." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Proposed a set of new evaluation metrics and framework for tabular data synthesis from fidelity, privacy and utility" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024systematic,\ntitle={Systematic Assessment of Tabular Data Synthesis},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3ANoEa7roV},\nnote={under review}\n}" }, "abstract": { "value": "Data synthesis has been advocated as an important approach for utilizing data while protecting data privacy. In recent years, a plethora of tabular data synthesis algorithms (i.e., synthesizers) have been proposed. A comprehensive understanding of these synthesizers' strengths and weaknesses remains elusive due to the absence of principled evaluation metrics and head-to-head comparisons between state-of-the-art deep generative approaches and statistical methods. In this paper, we examine and critique existing evaluation metrics, and introduce a set of new metrics in terms of fidelity, privacy, and utility to address their limitations. 
Based on the proposed metrics, we also devise a unified objective for tuning, which can consistently improve the quality of synthetic data for all methods. We conducted extensive evaluations of 8 different types of synthesizers on 12 real-world datasets and identified some interesting findings, which offer new directions for privacy-preserving data synthesis." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Tabular Data Synthesis", "Privacy", "Evaluation Metric", "Generative Models" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/1493fa9bb40e8023cad955b6e1048bdcbf5d120b.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/3769be1ad5e564411633f8476d83f8e43a8700b3.zip" }, "title": { "value": "Systematic Assessment of Tabular Data Synthesis" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3AQAUMObuc
Online importance sampling for stochastic gradient optimization
main
Active
SGD;Importance sampling
optimization
3;3;5
4;3;4
2;1;3
1;1;2
3;2;2
3.666667
3.666667
2
1.333333
2.333333
0.5
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "This paper has substantial overlaps with the paper [1] on arXiv. For example, Section 3 in this paper is almost the same as Section 3 in [1], Eq. 5 in this paper is the same as (4) in [1], The figure in line 270 is the same as the one in Page 4 of [1], Algorithm 1 in this paper is the same as Algorithm in [1], and so on.\n\nAlthough this paper might be a resubmission on top of [1] by the same authors, it is still quite weird that [1] is not properly cited and compared as a related work since the experimental results in this paper are quite different from those in [1] and the main selling points of these two papers are intrinsically different: [1] highlights the multiple importance sampling method while this paper is purely based on importance sampling with a single distribution. \n\n[1] Salaün, Corentin, Xingchang Huang, Iliyan Georgiev, Niloy J. Mitra, and Gurprit Singh. \"Multiple importance sampling for stochastic gradient estimation.\" arXiv preprint arXiv:2407.15525 (2024)." }, "flag_for_ethics_review": { "value": [ "Yes, Research integrity issues (e.g., plagiarism, dual submission)" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Why (6) is only for binary classification tasks (as indicated in Line 205)? Could $J$ be larger than 2? \n- In [1], the authors mention that ``The individual parameter derivatives vary uniquely across the data points, and estimation using a single distribution inevitably requires making a trade-off\" and advocate for the multiple importance sampling (MIS) approach. Could you please comment on this? Could you experimentally compare their MIS-based algorithm with your algorithm?\n\n[1] Salaün, Corentin, Xingchang Huang, Iliyan Georgiev, Niloy J. Mitra, and Gurprit Singh. \"Multiple importance sampling for stochastic gradient estimation.\" arXiv preprint arXiv:2407.15525 (2024)." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "- The idea of using importance sampling weights for data pruning is interesting.\n- The plots in Figure 1 numerically verify that the learned importance sampling weights are somewhat meaningful and provide intuitions of why this method could improve upon uniform sampling." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a new adaptive method of importance sampling for stochastic gradient estimation in multi-class classification. The sampling weights of this method do not depend on the costly full backpropagation on each data point. The importance sampling weights can also be used for data pruning, which can be further combined with the importance sampling for gradient estimation. The authors conducted experiments on classification tasks to verify the effectiveness of their algorithm compared to SGD with uniform sampling and previous importance sampling methods." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- A major concern with the importance sampling method proposed in this paper is that it remains \"loss-based\". To be specific, the weight of each data $x$ is proportional to $\\\\|\\frac{\\partial \\mathcal{L}(x)}{\\partial m(x,\\theta)}\\\\|\\_2 = \\sqrt{\\sum_{j=1}^J(s\\_j(x) - y\\_j(x))^2}$, where $s\\_j(x)$ is the predicted probability of data $x$ belongs to class $j$ while $y\\_j(x)$ is the groundtruth. Thus, the importance sampling weight of data $x$ can be viewed as its $\\ell_2$ loss on label prediction. However, it is unclear how this approach relates to the theoretically optimal importance sampling weight based on gradient norms. If the gradient w.r.t. the output does not take the specific form as in the logistic loss, does it still make sense to sample based on the norm of the gradient w.r.t the output?\n- There is no formal convergence analysis of the proposed algorithm. So the algorithm remains heuristic. \n- The experiments in this paper were not repeated with different random seeds, resulting in a lack of error bars on reported values and curves. This makes their experimental results less reliable." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Can you clarify the novelty of your proposed metric and why should it be better than those used in the literature?\n2. Can you add more comparisons in the experiments with more recent works ([2] and a lot more) and also the most simple random reshuffling with a reduced number of epochs?\n3. Can you add more details regarding sampling with or without replacement? It is said within a batch, it is sampled without replacement, so the samples will not repeat. But what about for different batches from the same epoch. To me, it seems like samples can repeat within an epoch, and this can lead to inferior performance compared to those will not repeat within an epoch such as [2]." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Overall, the writing is clear and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose to use the derivative wrt to the network output for importance sampling. They also propose that their method can be used for online data pruning. They demonstrate their performance is stronger than some of the baselines." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "There are many papers in the literature regarding importance sampling and the authors seem not to know many of them. 
It is true that the methods proposed in earlier years suffer from poor computational efficiency, but nowadays there are many methods with little overhead.\n\n\n1. The proposed method has limited novelty. While the authors claim that the derivative w.r.t. the network output is different from what is used in (Katharopoulos & Fleuret, 2018), from my understanding, it is pretty similar if not identical, and there are many other works that utilize something similar, like the EL2N score in [1]. Even if there is some difference, the novelty seems limited and the changes are not argued or justified. (For example, the difference may just be whether the derivative is taken w.r.t. the pre- or post-activation output.)\n2. The experimental comparison is weak. They do not compare with some more recent work like [2]. Additionally, they do not compare with a simple baseline that is already used in practice. For example, random shuffling with a reduced number of epochs is often stronger than many of these importance sampling methods. (This is sometimes much stronger than uniform sampling as the samples won’t repeat within an epoch. Also, it is important that the learning rate schedule changes so that the learning rate decays.)\n\n\n\n[1] Paul, M., Ganguli, S., & Dziugaite, G. K. (2021). Deep learning on a data diet: Finding important examples early in training. Advances in neural information processing systems, 34, 20596-20607.\n\n[2] Qin, Z., Wang, K., Zheng, Z., Gu, J., Peng, X., Zhou, D., ... & You, Y. InfoBatch: Lossless Training Speed Up by Unbiased Dynamic Data Pruning. In The Twelfth International Conference on Learning Representations." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See above" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The authors evaluated their proposed framework on multiple datasets and different tasks. The results look promising and show gains over the competitors that seem consistent across tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work studies stochastic gradient descent with online importance sampling and data pruning. The authors propose a practical metric that requires little computation and use it both for the importance weights and pruning scores. This metric is updated on the fly during training. The authors then evaluate their framework on multiple tasks such as classification and regression and popular benchmark datasets such as Cifar10/100 and MNIST." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) I found the writing in Sections 3 and 4 to be unclear and somewhat lacking in precision; they would need rewriting and clarifications in my opinion.\nFor example, equation (1) is confusing, as it seems different from the typical loss minimized, which is generally expressed as $\\mathbb{E}(\\mathcal{L}(m(x, \\theta), y))$, where $x,y$ follow a given data generating process. I fail to understand this renormalization and what $p(x,y)$ refers to here: is it the data-generating process or importance sampling weights? Could you explain and define $p(x,y)$.\nBesides, there are some assumptions that are not clearly stated and appear here and there in the text, for example, that the model is Lipschitz (Section 4.1). \n\n2) The proposed metric seems to be the same as the EL2N proposed in Deep Learning on a Data Diet, Paul et al. Could the authors explain the difference?\n\n3) Although the proposed method seems to outperform the other methods consistently, the differences are sometimes very small, and it is difficult to know if we can not attribute it to statistical noise. The authors indicate that they average over 3 runs; it would be interesting to quantify the variability of the results." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "This work introduce a novel importance sampling algorithm for machine learning frameworks improving convergence by prioritizing crucial data points through simplified importance functions based on loss derivative with minimal computational overhead." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024online,\ntitle={Online importance sampling for stochastic gradient optimization},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3AQAUMObuc},\nnote={under review}\n}" }, "abstract": { "value": "Machine learning optimization commonly relies on stochastic gradient descent, where the accuracy of gradient estimation is crucial for model performance. Rather than relying on uniform sampling, importance sampling can improve accuracy by focusing on data points that have more significant impact on learning. However, existing methods for importance sampling face challenges with computational efficiency and integration into practical machine learning workflows.\nIn this work, we introduce a novel adaptive metric based on the loss derivative wrt the network output that can be used for both importance sampling and data pruning. Our metric not only enhances gradient accuracy by prioritizing influential data points but also enables effective pruning by identifying and removing data that contributes minimally to training. We propose an efficient adaptive algorithm that leverages this metric with minimal computational overhead. Our evaluations on classification and regression tasks demonstrate improved convergence and reduced training data requirements, validating the efficacy of our approach." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "SGD", "Importance sampling" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/89d0fe6f51ebaf471f29eb7d0ed8812f5b27ad87.pdf" }, "presentation": null, "primary_area": { "value": "optimization" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Online importance sampling for stochastic gradient optimization" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3BhZCfJ73Y
Not All Prompts Are Made Equal: Prompt-based Pruning of Text-to-Image Diffusion Models
main
Active
Model Pruning;Diffusion Models;Inference Efficiency
generative models
3;5;6;8
4;3;2;4
3;2;2;3
1;2;2;3
2;2;2;3
5.5
3.25
2.5
2
2.25
-0.083624
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. The authors utilize a pre-trained sentence transformer as the prompt encoder in their training process. Do the authors have any insights into how the size of the prompt encoder influences the overall performance, as the size of the prompt encoder will affect the models' ability to understand the input prompt? \n\n2. Training diffusion models often incorporate classifier-free guidance. Is the proposed method compatible with training under this manner?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper introduces a novel approach by proposing adaptive prompt-based pruning that routes input prompts to specialized pruned sub-networks based on their characteristics. This represents a difference from conventional static and dynamic pruning methods,\n2. The empirical results training on datasets like CC3M and MS-COCO demonstrate the method’s effectiveness compared to other pruning methods. The results show that the proposed method outperforms other baselines by significantly reducing computational cost while maintaining or improving output quality as measured by metrics like FID, CLIP score, and CMMD score." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces adaptive prompt-based pruning strategy to reduce the computation cost of diffusion model. The proposed approach involves encoding input prompts into architecture embeddings, which are mapped to specialized architecture codes. These codes determine the routing of each prompt to a pruned sub-network. By training a prompt router using a combination of contrastive learning and optimal transport, the proposed method ensures that prompts are dynamically assigned to appropriate sub-networks. The results of the paper demonstrate the reduction in computational cost while maintaining FID and CLIP scores." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The major concern is the empirical evaluation of the proposed method: \n\n1. as stated in the paper, most organizations typically fine-tune pre-trained diffusion models on their target data but evaluate these models on broader benchmarks to demonstrate generalizability. In this study, however, the authors only fine-tune their model on CC3M and MS-COCO and limit their evaluation to the corresponding validation sets. Expanding the evaluation to a common benchmark would better showcase the model’s generalization capabilities. Specifically, demonstrating that the prompt router can handle prompts outside the training distribution would be more convincing.\n\n2. The paper also references other model pruning methods, such as MobileDiffusion[1], SnapFusion[2], and LD-Pruner[3]. 
However, it does not include quantitative comparisons with these approaches. It would be helpful for the authors to explain why these comparisons were omitted.\n\n3. In efficient inference for stable diffusion, recent papers show that one-step or few-step generation can speed up the generation. This paper does not include comparisons with methods like INSTAFLOW[4], which would have provided valuable insights into how APTP compares with state-of-the-art approaches in rapid generation.\n\n\n\n[1] Zhao, Yang, et al. \"Mobilediffusion: Subsecond text-to-image generation on mobile devices.\" arXiv preprint arXiv:2311.16567 (2023).\n\n[2] Li, Yanyu, et al. \"Snapfusion: Text-to-image diffusion model on mobile devices within two seconds.\" Advances in Neural Information Processing Systems 36 (2024).\n\n[3] Castells, Thibault, et al. \"LD-Pruner: Efficient Pruning of Latent Diffusion Models using Task-Agnostic Insights.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\n\n[4] Liu, Xingchao, et al. \"Instaflow: One step is enough for high-quality diffusion-based text-to-image generation.\" The Twelfth International Conference on Learning Representations. 2023." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See Weaknesses" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The idea behind the paper is technically sound and novel.\n2. The paper is well written. \n3. The authors present some interesting interepretability experiments on expert assignment that aid the proposed concepts." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose prompt-based tuning of text-to-image diffusion models, in which different sub-networks within pre-trained models are trained for different prompts/concepts. They authors performe experiments on multiple datasets to show that for a given latency, their model performs comparably to higher latency pretrained models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Limited experiments - Although I find the proposed ideas novel, I believe that the paper lacks extensive experimentation on \n - different types of architecture - It is currently unknown if the proposed methods is generalizable across architectures (DiT, MMDiT etc). \n - Small datasets - How does the method perform when data is limited?\n - Fine grained concepts - How does their method handle expert assignment when concepts are fine-grained (breeds of different animals)\n2. Comparison to Mixture-of-Experts (MoE) models - How does the proposed method compare to other prompt-conditinal architectures like MoE text-to-image diffusion models? 
Currently the competitors in Table 1 (a and b) are static structural pruning baselines, but I believe the paper's contribution is prompt-conditional pruning, which demands comparison to prompt-conditional efficient architectures like MoEs like [1].\n3. I am concerned about the 4 and 7 point drop in FID of the proposed method in Table 1. The authors have not presented any trade-off between latency and performance, which would help understand how limiting computational budget affects performance\n\n\n[1] RAPHAEL: Text-to-Image Generation via Large Mixture of Diffusion Paths, Xue et al" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Did the authors try their approach on other architectures? even other backbones of Stable Diffusion?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The idea of pruning different parts of the network for each prompt is non-trivial and interesting.\n- Visual results show APTP seems to use various pruning modes, and does not collapse to a single one." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a new approach for accelerating the sampling from diffusion models using adaptive pruning denoted as APTP. The method is tailored for a specific scenario where the inference is on a specific known data distribution and the data for it is given (e.g. a company’s private dataset). Using this data, APTP trains a prompt router module that selects which parts of the architecture should be pruned for each given prompt. The selection is from a fixed set of reduced architecture states (denoted as architecture codes). Both the prompt router and arch. codes are trained end-to-end." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**(1) Missing Baselines of Less Diffusion Steps:** My main concern is that the paper does not compare to approaches that perform less diffusion steps. More specifically the following should be considered as baselines:\n\n- *Step Skipping Distillation of Diffusion Models [1,2,3]:* As mentioned in the paper, several methods tackled faster inference of diffusion models using this idea, mostly by knowledge distillation of the model to inference with fewer (between 1-4) steps. These methods should cut the latency by 8-25 times from the original model, while the proposed APTP was shown to cut latency by much less (not even 50% of the original model timing). These approaches also require training as APTP does, and their weights are readily available for many SoTA architectures, e.g. SDXL, SD 1.5, etc. \n\n- *Caching Based Acceleration [4]:* These approaches cache features of the U-Net architecture and reuse them in later steps. 
Such approaches hold a distinct advantage of being training-free.\n\n- *Less Denoising Steps at Inference Time:* A trivial training-free baseline is to simply sample images with less denoising steps. I wonder how that would compare to the proposed APTP. Can APTP further accelerate such a setting?\n\nAs step skipping approaches became more popular recently, I believe including some of these as baselines is a must for such an approach.\n\n**(2) Quality of Writing (Sec. 3):** Sec. 3 is complicated and difficult to understand. Specifically, I think Sec. 3.2,3.3 are overloaded and could be shortened to a much clearer version. These subsections are the only place in the paper that describe the actual approach of APTP (and what does arch. codes mean), therefore are especially important for the proposed approach.\n\n**(3) Visual Comparisons:** The paper offers a very limited selection of visual comparisons - having only 8 qualitative comparisons to the original baseline, and no such comparisons to previous approaches. Could the authors please supply a larger set of comparisons to the original model and baselines (including a step skipping distillation [1,2,3], caching [4] and less denoising steps). \n\n**(4) Clustering of Pruning Modes:** While this qualitative analysis was promised in the abstract and introduction, it only exists in figures at the last 2 pages of the appendix without any textual description. Given it is mentioned in the abstract I think it should be included as a part of the main manuscript.\n\n**(5) Limited Setting and Empirical Results:** Unlike other approaches, the proposed method is limited to a specific data distribution. Although the task is much more confined, the reduction in latency is not substantial: To keep performance comparable to the original model in terms of FID or CLIP score, APTP can only reduce 20% of the latency (Tab.1). \n\n[1] Liu, Xingchao, et al. \"Instaflow: One step is enough for high-quality diffusion-based text-to-image generation.\" The Twelfth International Conference on Learning Representations. 2023.\n\n[2] Sauer, Axel, et al. \"Adversarial diffusion distillation.\" European Conference on Computer Vision. Springer, Cham, 2025.\n\n[3] Meng, Chenlin, et al. \"On distillation of guided diffusion models.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.\n\n[4] Ma, Xinyin, Gongfan Fang, and Xinchao Wang. \"Deepcache: Accelerating diffusion models for free.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* I would be curious to know how big the router-model itself is (in terms of parameters and memory footprint) and, by extension, how does the size of the router model affect the maximum batch size on an A100 GPU?\n* Did you do any additional experiments concerning varying resolutions and aspect ratios and their impact on the pruned-image quality?\n* Did you try applying this technique to transformer-based image-generation models like Pix-Art-Alpha / Sigma, do you see any major hurdles regarding the switch from ConvNets to Transformers?\n* How specific is the APTP-model to its training data? How do out-of-distribution prompts impact the quality of the generations?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* Good Illustrations and concise explanation\n* The core idea is an interesting combination of known concepts in a new, but highly relevant setup.\n* A good amount of comparison to other methods using multiple datasets with different image-types (COCO being primarily real-world images and CC3M being more diverse)" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a mixture-of-expert-esque strategy for efficiently, which they coin Adaptive Prompt Tailored Pruning (APTP). The methods combine the benefits of dynamic and static pruning methods and archives good generative metrics while decreasing the computational cost." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* While FID, CLIP-Score and CMMD alongside the visual examples provide a good overview, I, personally, would have preferred some human-user study (which I see is a lot of effort and for this reason, would not ask from the authors). As an alternative substitute, I propose compute the pick-scores [Kirstain et al. 2023] against the pruning metrics similar to [Pernias et al 2024] on a set of diverse prompts like Partiprompts could provide additional, easily interpretable evidence of this methods' superiority in terms of generation quality.\n\nKirstain et al. 2023 https://proceedings.neurips.cc/paper_files/paper/2023/hash/73aacd8b3b05b4b503d58310b523553c-Abstract-Conference.html\n[Pernias et al. 2024] https://proceedings.neurips.cc/paper_files/paper/2023/hash/73aacd8b3b05b4b503d58310b523553c-Abstract-Conference.html" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a prompt-based pruning method for Text-to-Image diffusion Models, which prunes a pretrained model for a target task into a set of specialized efficient models for different categories of input prompts, given a target compute budget." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024not,\ntitle={Not All Prompts Are Made Equal: Prompt-based Pruning of Text-to-Image Diffusion Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3BhZCfJ73Y},\nnote={under review}\n}" }, "abstract": { "value": "Text-to-image (T2I) diffusion models have demonstrated impressive image generation capabilities. 
Still, their computational intensity prohibits resource-constrained organizations from deploying T2I models after fine-tuning them on their internal *target* data. While pruning techniques offer a potential solution to reduce the computational burden of T2I models, static pruning methods use the same pruned model for all input prompts, overlooking the varying capacity requirements of different prompts. Dynamic pruning addresses this issue by utilizing a separate sub-network for each prompt, but it prevents batch parallelism on GPUs. To overcome these limitations, we introduce Adaptive Prompt-Tailored Pruning (APTP), a novel prompt-based pruning method designed for T2I diffusion models. Central to our approach is a *prompt router* model, which learns to determine the required capacity for an input text prompt and routes it to an architecture code, given a total desired compute budget for prompts. Each architecture code represents a specialized model tailored to the prompts assigned to it, and the number of codes is a hyperparameter. We train the prompt router and architecture codes using contrastive learning, ensuring that similar prompts are mapped to nearby codes. Further, we employ optimal transport to prevent the codes from collapsing into a single one. We demonstrate APTP's effectiveness by pruning Stable Diffusion (SD) V2.1 using CC3M and COCO as *target* datasets. APTP outperforms the single-model pruning baselines in terms of FID, CLIP, and CMMD scores. Our analysis of the clusters learned by APTP reveals they are semantically meaningful. We also show that APTP can automatically discover previously empirically found challenging prompts for SD, *e.g.,* prompts for generating text images, assigning them to higher capacity codes." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Model Pruning", "Diffusion Models", "Inference Efficiency" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/fae55d05a8cf9a2535f855c4b04da54394c51551.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "Not All Prompts Are Made Equal: Prompt-based Pruning of Text-to-Image Diffusion Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3BoCwZFRJX
LINA: An LLM-driven Neuro-Symbolic Approach for Faithful Logical Reasoning
main
Active
Large Language Models;Logical Reasoning;Neuro-Symbolic Approach;Hypothetical-Deductive Reasoning
neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
3;3;5;6
4;4;4;3
1;2;2;4
1;2;2;3
2;3;3;3
4.25
3.75
2.25
2
2.75
-0.777778
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. How well the LLM-powered deductive logic engine performs on standard logical deduction problems.\n2. The reported accuracy for ReClorTeam (GPT-4-0613) on the ReClor leaderboard is 90.10, which is notably different from the numbers presented in this paper. What may cause the difference?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "**Originality: 4/5**\n\nA closely related work, SatLM, uses the Z3 solver as its logical reasoning backbone, whereas LINA leverages an LLM-prompt-based approach. While both frameworks share a similar conceptual foundation, LINA’s LLM-based reasoning backbone is more adaptable to loosely defined questions, enabling it to outperform the more rigid solver approach. This novel application of an LLM-driven deductive logic engine enhances generalizability.\n\n**Quality: 3.5/5**\n\nPros: The authors provide both theoretical proofs on complexity and robust experimental results across multiple benchmarks. One question that arises is how well the LLM-powered deductive logic engine performs on standard logical deduction problems.\n\nCons: The reported accuracy for ReClorTeam (GPT-4-0613) on the ReClor leaderboard is 90.10, which is notably different from the numbers presented in this paper.\n\n**Clarity: 3.5/5**\n\nPros: Figure 1 effectively clarifies the pipeline, and the appendix, which includes the actual prompts, further aids understanding.\n\nCons: Figure 2 is challenging to interpret without sufficient context, and it’s unclear why the Chain-of-Thought (CoT) approach does not explore additional steps.\n\n**Significance**\n\nThis work is of the interest for both neural symbolic community and NLP community." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose LINA, a framework that decomposes the reasoning steps for complex questions using four main components: (1) an LLM-based logic extractor, (2) an LLM-based query extractor, (3) an LLM-powered logic deducer, and (4) a core algorithm that integrates context and derived results to analyze the correctness of the underlying answers. They also provide theoretical analysis of LINA’s properties and complexity. Experimental results demonstrate that LINA significantly improves performance on benchmarks requiring multi-step reasoning, outperforming existing methods like Chain-of-Thought (CoT) by a substantial margin." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "As shown in strength." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1) The authors propose an agentic framework that utilizes formal logic to enhance LLMs. Could a broader comparison with other relevant approaches (in addition to LINC) [1-4] be considered to provide a more comprehensive evaluation?\n\n\n2) LLMs are generally stronger in processing natural language compared to formal logic. Could the authors clarify the advantages they see in converting logical reasoning tasks from natural language into Propositional or First-order Logic for LLM-based reasoning? If this conversion strategy offers benefits, might it be more effective to prompt LLMs with Chain-of-Thought reasoning including Propositional or First-order Logic?\n\n\n3) The authors introduce an agentic framework for symbolic reasoning without an external solver. Could they explain the rationale behind this choice in more detail? If the concern is that formal logic generated by LLMs may be unreliable for external solvers, how does the proposed framework address this issue? Additionally, since the agentic approach relies on a sufficiently capable base model for sub-task management, would this framework extend well to smaller models (such as 7-8B parameters)?\n\n\n4) Given that LLMs can struggle with self-bias [5], could the authors discuss any potential limitations in having the same LLM serve as both the deductive reasoner and supervisor/judge? Are there mechanisms in place to help mitigate self-bias and enhance the model's verification process?\n\n\n5) One challenge with deduction using formal logic can be the restricted scope, especially if the required deduction rules are not explicitly included as known information. Could the authors share any strategies to address this challenge? Additionally, do they see any potential for extending this formal logic framework to reasoning tasks that require broader expressiveness, such as math reasoning, coding, and question answering?\n\n\nReference:\n\n[1] Pan, Liangming, et al. \"Logic-LM: Empowering Large Language Models with Symbolic Solvers for Faithful Logical Reasoning.\" The 2023 Conference on Empirical Methods in Natural Language Processing.\n\n[2] Yang, Sen, et al. \"Neuro-symbolic integration brings causal and reliable reasoning proofs.\" arXiv preprint arXiv:2311.09802 (2023).\n\n[3] Xu, Fangzhi, et al. \"Symbol-LLM: Towards foundational symbol-centric interface for large language models.\" arXiv preprint arXiv:2311.09278 (2023).\n\n[4] Xu, Jundong, et al. \"Faithful Logical Reasoning via Symbolic Chain-of-Thought.\" arXiv preprint arXiv:2405.18357 (2024).\n\n[5] Huang, Jie, et al. \"Large Language Models Cannot Self-Correct Reasoning Yet.\" The Twelfth International Conference on Learning Representations, 2024." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Improving LLM-based reasoning with neuro-symbolic integration is a good research problem. The writing is well-structured and clear. Empirical results are given with details. Code and data are provided for reproducibility." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces LINA, a neuro-symbolic approach designed to enhance the logical reasoning abilities of LLMs. LINA implements a hypothetical-deductive reasoning paradigm by enabling LLMs to autonomously manage logical reasoning without external solvers. It extracts propositional logic from natural language, and performs deductive logical reasoning. Empirical results show LINA outperforms existing methods, including LINC and other prompting techniques." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The core concept of the proposed approach is an agentic framework equipped with formal logic, which is relatively common. The advantages of translating natural language into formal logic and using LLMs for reasoning remain ambiguous. The effectiveness of the agentic framework is influenced by the capabilities of base model and potential self-bias. The application scope of the method is limited." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "(1) Why did you choose to use the train set of FOLIO while using the validation set for other datasets? Is there a specific reason for this decision? Most prior work (e.g., Logic-LM) typically evaluates the test set of FOLIO, so it would be helpful to clarify the rationale behind this choice.\n\n(2) Could you provide more details about how the hypothesis is generated? Additionally, could you elaborate on how the Reasoning Process Judgment is integrated into the framework? It appears in Figure 1 but is not included in Algorithm 1, which causes some confusion. Providing more information on this would make the methodology easier to follow for readers.\n\n(3) Do you have quantitative data to support your claim?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "(1)\tThe framework claims to effectively address the issue of poor generalization to different question formats and the problem of information loss when using only symbolic language by combining symbolic and natural language.\n\n(2)\tThe main experiment shows that the method surpasses the baselines across five datasets using GPT-3.5 and GPT-4o." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a framework called LINA to address the generalization problem and information loss found in existing methods. The framework consists of two main components: an Information Extraction Module and a Symbolic Reasoning Module. First, the Information Extraction Module condenses and translates the reasoning question into a symbolic format. Then, the Symbolic Reasoning Module iteratively performs one-step deductive reasoning, utilizing both symbolic and natural language, with a judgment step to verify the correctness of each reasoning step. By leveraging GPT-3.5 and GPT-4o, the paper demonstrates that LINA outperforms the baselines across five datasets. Additionally, the paper includes comparisons to ToT and SatLM, along with an ablation study and a case study." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "(1)\tThe level of innovation in this work raises some concerns. To the best of my knowledge, some previous work (SymbCoT [3]), also addresses the issue of information loss by leveraging both natural language information and first-order logic (FOL). The key difference is that SymbCoT conducts reasoning and verification as a linear process, whereas this work transforms the process into an iterative one. In summary, it appears that this work mainly modifies the linear process from SymbCoT into an iterative framework, which limits its novelty. From my perspective, the primary innovation lies in this framework’s adaptability to a wide range of question formats, a feature the previous work lacks.\n\n(2)\tThe Information Extraction Module requires further clarification. How is the context classified? Additionally, how do you determine the \"ease of translation\"? Upon reviewing the context classification prompt provided in the appendix, it seems more focused on simplifying logical statements rather than classification. Please clarify if my understanding is incorrect.\n\n(3)\tIn Section 4.2, you explain that the context is first classified into lengthy text and non-lengthy text, with the lengthy text then being condensed into shorter sentences. These condensed texts are further classified based on their ease of translation. Further details are needed to understand this process. For example, how many classes are used in this step? Which classes will be translated, and which will not? This is important because the paper claims an advantage in using both symbolic and natural language, so it is crucial to understand what content is represented in symbolic language and what remains in natural language.\n\n(4)\tThe Reasoning Module lacks crucial details. Firstly, there is no explanation of how the deductive process works and how information LS, NL, and H interact to reach the reasoning conclusion C. Secondly, when performing the Check() operation, is it checking for errors in the reasoning process, or is it verifying whether the information contradicts or supports the hypothesis? Third, you mention that if an error occurs, the supervisor may adjust C or reset C = H. How is this step implemented exactly? This is not explained in the main text nor in Algorithm 1, and more details are needed to help readers understand how the reasoning module operates.\n\n(5)\tThe paper lacks a detailed analysis, which hinders the reader's understanding and the transparency of the framework. 
For example, the paper's main claim is that it addresses information loss and improves the framework's generalizability, but there is a lack of relevant analysis to support this claim. Besides, prior work in this stream (e.g., LINC [1], Logic-LM [2], SymbCoT [3]) typically includes an analysis of accuracy per necessary proof depth in ProofWriter. Including this type of analysis would be valuable, as it could demonstrate how robust your method is with respect to increasing reasoning complexity, a common challenge in real-world applications. Furthermore, the paper lacks an error analysis, which would provide a clearer understanding of where failures occur and improve confidence in the proposed framework.\n\n(6)\tThe analysis section also lacks some details. In Section 5.4, when you state that the LLM cannot generate effective z3 solver code or easily adapt for execution, does this mean that the rule-based solver completely fails to execute the problem, or can it execute but fail to reach the correct answer? Do you have quantitative data, such as execution rates, to back up this observation?\n\nReference:\n[1] LINC: A Neurosymbolic Approach for Logical Reasoning by Combining Language Models with First-Order Logic Provers (Olausson et al., EMNLP 2023)\n[2] Logic-LM: Empowering Large Language Models with Symbolic Solvers for Faithful Logical Reasoning (Pan et al., EMNLP 2023)\n[3] Faithful Logical Reasoning via Symbolic Chain-of-Thought. (Xu et al., ACL 2024)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "see above" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "See below" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a pure prompt-based framework that solves reasoning problems, namely LINA. The framework first prompts LLM to convert problem into formal logic representation with natural language information; then it solves the problem as a deductive reasoning task by iteratively prompt the reasoner for deducing new facts and the supervisor for verification." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "## Novelty\n\nThe proposed method is a pure prompt-based framework with a straightforward design. The specific design of performing reasoning without using external tools has been studied in several prior works [1,2]. 
That said, the novelty of this work is minor.\n\n\n## Quality\n\nThe idea of \"removing the tool usage yields better performance for deductive reasoning\" is poorly motivated and justified.\n\nL48 \"First, the process of converting logical problems into formal expressions leads to information loss\"\n- This is true for problems without FOL groundtruth, such as ReClor and LogiQA, which are evaluated in the experiments.\n- However, **these problems are not meant to be solved with the traditional formal logic method in the first place**; prior work such as SatLM and LogicLM mostly focuses on solving the NLI task with datasets that come with groundtruth FOL annotations. Also note that ReClor and LogiQA contain not only deductive reasoning but also other reasoning tasks that cannot be characterized by FOL.\n- That said, criticizing that translation leads to information loss is fine, but it hardly motivates the approach proposed here if it is meant to solve problems that already fall outside of the formal logic bucket.\n\nL78 \"Second, the reliance on specific external tools results in poor generalization of these methods, limiting them to solving only certain types of problems, such as FOL propositional inference problems or satisfiability problems\"\n- This statement is problematic. Many works show that tool usage increases rather than decreases the capability of LLMs in solving formal reasoning problems.\n- Formal tools such as Prover9 and Z3 can be used for not only propositional logic but also first-order logic. And the sat problem is a very generic problem setting into which many reasoning problems can be converted, so being able to solve sat problems should not be considered a disadvantage.\n- That said, the authors should motivate their work properly.\n\n\nNot every reasoning problem in ReClor and LogiQA can be framed as deductive reasoning:\n- The authors propose to solve all reasoning problems with deductive reasoning. This is simply inappropriate for many of the problems in ReClor and LogiQA. For example, ReClor contains questions like \"which of the following most challenges/supports/aligns with the argument in the context?\" and \"which of the following arguments shares the same reasoning pattern as that in the context\"; such questions do not fit into any formal logic categories and certainly cannot be solved with deductive reasoning.\n\n\nThe experimental setting misses many details and is potentially problematic:\n- It's unclear how many ICL examples are used for GPT CoT baselines. However, an accuracy of 76 with GPT-4o on ReClor seems too bad to be true. As a comparison, [3] shows that with just a few ICL examples, GPT-3.5 can achieve about 60% accuracy and GPT-4 can achieve above 90% accuracy, which aligns much better with the scores reported in the public leaderboard.\n- As mentioned above, including methods like LINC in ReClor and LogiQA benchmarks is not sensible, as these methods are designed for the NLI task and not these benchmarks.\n\t\n\n\n## Clarity\n\nThe paper is generally easy to follow.\n\n\n## Significance\n\nWhile I agree with the authors that moving beyond standard NLI tasks into more \"in the wild\" reasoning problems such as that in ReClor is an interesting and important direction, it cannot justify the pure prompt-based design, as it effectively turns the approach into yet another fancy CoT method that could hallucinate during its reasoning. 
From a pure performance perspective, the significance of this work is still questionable as the results from the baseline approach are too bad to be true. That said, the significance is also minor.\n\n\n[1] Zhu, Zhaocheng, et al. \"Large language models can learn rules.\" arXiv preprint arXiv:2310.07064 (2023).\n\n[2] Feng, Jiazhan, et al. \"Language models can be logical solvers.\" arXiv preprint arXiv:2311.06158 (2023).\n\n[3] Yang, Yuan, et al. \"Can LLMs Reason in the Wild with Programs?.\" arXiv preprint arXiv:2406.13764 (2024)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024lina,\ntitle={{LINA}: An {LLM}-driven Neuro-Symbolic Approach for Faithful Logical Reasoning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3BoCwZFRJX},\nnote={under review}\n}" }, "abstract": { "value": "Large Language Models (LLMs) have exhibited remarkable potential across a wide array of reasoning tasks, including logical reasoning. Although massive efforts have been made to empower the logical reasoning ability of LLMs via external logical symbolic solvers, crucial challenges of the poor generalization ability to questions with different features and inevitable question information loss of symbolic solver-driven approaches remain unresolved. To mitigate these issues, we introduce **LINA**, a LLM-driven neuro-symbolic approach for faithful logical reasoning. By enabling an LLM to autonomously perform the transition from propositional logic extraction to sophisticated logical reasoning, LINA not only bolsters the resilience of the reasoning process but also eliminates the dependency on external solvers. Additionally, through its adoption of a hypothetical-deductive reasoning paradigm, LINA effectively circumvents the expansive search space challenge that plagues traditional forward reasoning methods. Empirical evaluations demonstrate that LINA substantially outperforms both established propositional logic frameworks and conventional prompting techniques across a spectrum of five logical reasoning tasks. Specifically, LINA achieves an improvement of 24.34% over LINC on the FOLIO dataset, while also surpassing prompting strategies like CoT and CoT-SC by up to 24.02%. Our code is available at https://anonymous.4open.science/r/nshy-4148/." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Large Language Models", "Logical Reasoning", "Neuro-Symbolic Approach", "Hypothetical-Deductive Reasoning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/e0af48b8cfbf91fd34ecf218d8d4003aa8e2b008.pdf" }, "presentation": null, "primary_area": { "value": "neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "LINA: An LLM-driven Neuro-Symbolic Approach for Faithful Logical Reasoning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3By4N0GAdt
Learning to Animate Images from A Few Videos to Portray Delicate Human Actions
main
Active
Image Animation;Video Generation;Few-shot
generative models
5;5;6;6
3;3;3;2
2;3;3;3
2;3;3;2
2;2;3;3
5.5
2.75
2.75
2.5
2.5
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to my questions above. Overall, I feel the method shows some promising results but more evaluations might be required to assess the approach." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. I think few-shot animation of human actors is an important direction and the proposed approach offers meaningful contributions to this field.\n\n2. The paper compares a range of baselines and demonstrates notable improvements in terms of the alignment to the reference image and smoothness of the actions. \n\n3. The paper is overall well organized. Ablation studies are shown for each component." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper tackles the problem of animating human actors in the few-shot setting. To address the challenges, the authors propose the FLASH framework, which mainly consists of a Motion Alignment Module and a Detail Enhancement Decoder. The effectiveness of the method is tested on 12 atomic human actions selected from HAA500." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. For the proposed approach, I wonder if it also works on general videos beyond human actors? If not, what is the specific design in the model tailored for human actors? I think it would add more value to the paper if it also works well on general motions. Since many of the baselines are designed for general motions, I think showing its generality is important and also makes the comparison more fair.\n\n2. Experiments: a) The visual quality is still limited, in the examples shown, there’s still clear object flickering and motion jittering. b) It is only tested on HAA500, with 12 actions. I think that more tests on different datasets are required to see the effectiveness of the method. As for the metrics in Table 1, some metrics are worse than some baselines and I think more explanation would be helpful.\n\n3. If given more videos, would the method still outperforms the baselines? Showing a figure that illustrates the improvements relative to the number of input videos would be helpful. This would make it clearer to understand the range in which the method outperforms others." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "The network details and training procedures are rather unclear. Please refer to the questions listed in the Weakness." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Although video generation models have achieved impressive advances, they rely on extensive training sets and computation resources. However, human video generation, especially with large body movements, is still challenging. The idea of exploiting very few video sequences to train video generation models specifically for humans is interesting." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the problem of generating videos with static images as input. Instead of having an extremely large dataset, they trained a model over a few videos(16 videos) with similar motions or actions. The key idea is to extract consistent motion patterns or features from those few videos; afterward, the human video is generated by enforcing the motion patterns as well as the appearance from the input image. They trained the models on HAA500 dataset containing several different categories of motion actions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "First, some of the technical details and network structure are not very clear. For example, 1) in Fig. 2 of the system overview, the text prompt is injected into the Unet, but from the caption of the figure and also from the description in the method part, it is still not clear how to get the text prompt fed into the Unet, and do we need to have a text-encoder, cross attention as in stable diffusion? 2) What is the encoder and structure in Fig.2? Do we train the entire network together with the encoder and decoder as well as Unet, or do we need first to train the encoder and decoder? 3) When selecting the 16 videos for training, what are the selection criteria? Is the selection done manually or automatically? 4) From the given network structure and description in the method part, it is still unclear how to encode the first/reference frame image into the network. \n\nSecond, from my understanding, a model will be trained for each text prompt. This is rather inefficient. \n\nThird, some images and visual results need to be included: 1) since augmentation plays an important role in the overall design, it is better to show some augmented images. 2) Without any video included in the supp, it is rather difficult to find out whether the temporal consistency issue exists." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Could you please provides failure cases of the techniques and point out certain constraints of the work. I think this part is missing." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This paper investigates the compelling and essential task of learning to animate human actions from a limited number of videos.\n\nThe problem is clearly explained and well-motivated, emphasizing the need to learn generalizable motion patterns without overfitting to the appearance of training videos, while ensuring smooth transitions from the initial reference image.\n\nTo address these key requirements, the authors introduce a Motion Alignment Module that aligns motion features and inter-frame correspondence relations. Additionally, to enhance transition smoothness from the reference image, FLASH employs a Detail Enhancement Decoder." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes FLASH (Few-shot Learning to Animate and Steer Humans), which improves motion generalization\nby aligning motion features and inter-frame correspondence relations between videos with different appearances. Even with limited training data, this approach minimizes the overfitting issue to visual appearances and enhances the generalization of learned motion patterns. Experiments demonstrate that FLASH effectively animates images with unseen human or scene appearances into specified actions while maintaining smooth transitions from the reference image." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The Motion Alignment Module requires generating highly augmented versions of videos to learn motion patterns across varied appearances. This process depends heavily on the quality and effectiveness of augmentations, which, if inadequate, could fail to capture essential variances in motion or introduce irrelevant features." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- How does the strongly augmented video mechanism work? What exactly is the \"random color adjustment\" here? After performing the \"strong augmentation\", is the video still in the data domain?\n- Is the proposed model trained from scratch or initialized from some pre-trained model? If relying on a pre-trained model, what model does it exactly fine-tune?\n- What is the requirement for the training data? How much similarity do the videos of the same motion have to share? Do they need to be in a similar view (i.e., front vs side vs back)?" 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The problem studied in this paper is interesting, important, and challenging. Video diffusion models indeed manifest strong hallucinations in human motion, which need to be addressed to facilitate several downstream tasks.\n- The solution of using some data augmentation techniques to solve the data sparsity issue is well-motivated. Based on this motivation, the authors design some reasonable network architecture modifications to learn from those augmented data.\n- Both quantitative and qualitative metrics are shown to have a better comparison between the proposed method and the baselines. Although not beating all baselines quantitatively, the user study shows better results for the proposed method.\n- Several ablation studies have been conducted to show the effectiveness of the proposed components." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies the problem of learning rarely-seen human motion information for video diffusion models from sparse video clips. The authors propose a \"motion alignment module\" to solve this challenging problem, where they first conduct pixel-level data augmentation and then encourage the model to reconstruct the video based on the shared motion information. Experiments are conducted with comparisons to baselines to show superiority of the proposed method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The technical writing of this paper is not clear enough to fully explain their method. For example, it's not clear how the method performs augmentation and whether the augmentation is reasonable. More questions can be found below.\n- Some potential baselines are not compared. For example, MotionDirector trains a motion LoRA to adapt the video diffusion model to rarely seen motions. How will the proposed method compare to those methods?\n- Experiment results are lacking for the comparison of the proposed method and baselines on Internet videos. It is not clear whether the proposed method is also superior on generalizability.\n- The ablation studies do not show a video animation result, so it's hard to tell whether the proposed components indeed help the generation quality. Also no quantitative user study is performed for the ablation study." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024learning,\ntitle={Learning to Animate Images from A Few Videos to Portray Delicate Human Actions},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3By4N0GAdt},\nnote={under review}\n}" }, "abstract": { "value": "Despite recent progress, video generative models still struggle to animate human actions from static images, particularly when handling uncommon actions whose training data are limited. In this paper, we investigate the task of learning to animate human actions from a small number of videos---16 or fewer---which is highly valuable in real-world applications like video and movie production. Few-shot learning of generalizable motion patterns while ensuring smooth transitions from the initial reference image is exceedingly challenging. 
We propose FLASH (Few-shot Learning to Animate and Steer Humans), which improves motion generalization by aligning motion features and inter-frame correspondence relations between videos that share the same motion but have different appearances. This approach minimizes overfitting to visual appearances in the limited training data and enhances the generalization of learned motion patterns. Additionally, FLASH extends the decoder with additional layers to compensate lost details in the latent space, fostering smooth transitions from the reference image. Experiments demonstrate that FLASH effectively animates images with unseen human or scene appearances into specified actions while maintaining smooth transitions from the reference image." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Image Animation", "Video Generation", "Few-shot" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/c540d3696b07d0790a15a4d48cfd5a42d248ca9f.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/d9f2e2fc1aa2c8d31fd15e45a994b3b35c7b8469.zip" }, "title": { "value": "Learning to Animate Images from A Few Videos to Portray Delicate Human Actions" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3E8YNv1HjU
Recite, Reconstruct, Recollect: Memorization in LMs as a Multifaceted Phenomenon
main
Active
memorization;ontologies;language modelling
foundation or frontier models, including LLMs
5;5;6;6
3;3;4;3
2;3;3;3
2;3;3;3
2;3;2;3
5.5
3.25
2.75
2.75
2.5
0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "+ The taxonomy of memorization is purely established based on the property of data, i.e., the number of duplications in the pre-training corpus and the implicit template within the data. However, memorization is also a concept and phenomenon related to model behavior and model behaviour can also be included into the taxonomy as an evidence to classify different types of memorization." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "+ A new taxonomy for understanding and analyzing the model memorization. \n+ Interesting findings on the dynamics of memorization during the scaling-up of data and model size.\n+ An empirical evaluation of the utility of the taxonomy based on predictability." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a taxonomy for model memorization and classifies model memorization into three categories, namely recitation, reconstruction and recollection. The authors identify several data-related or model-related features and test their correlation with model memorization. To verify the effectiveness of the proposed taxonomy, the authors trained three linear regression models for three categories respectively and found that group the regression models attain better performance than the predictors trained on other features." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "+ A perplexing part of the taxonomy is the classification of repetitive or incremental sequences following a specific pattern. If the sequence duplicates more than five times, how do we know whether it is truly \"memorized\" or it is reproduced simply because the LLM learns its pattern? \n+ Why do we use more than five times duplication as the decision boundary for recitation and non-recitation. How is the hyper-parameter decided? Using a single threshold actually assumes an equality in difficulty for reciting every sequence." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "What insights from previous work on memorization mechanisms support or conflict with these findings?" 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The proposed memorization taxonomy is intuitive and interesting, drawing parallels with human memorization. This taxonomy is particularly valuable as it provides a structured approach to analyzing what has typically been treated as a uniform phenomenon. \n\nThe analysis methodology is another strong point, featuring a thorough examination of dependencies between features and memorization categories, supported by effective predictive modeling to validate the taxonomy." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a novel taxonomy for memorization in language models, categorizing it into three types: recitation (highly duplicated sequences), reconstruction (predictable templates), and recollection (other memorized sequences). The authors validate their taxonomy through predictive modeling and analysis across model scales and training time, demonstrating how different factors influence each category distinctly. The work provides valuable insights into understanding memorization as a multifaceted phenomenon rather than a uniform behavior." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main weakness of this paper boils down to two key issues. First, while the idea of categorizing memorization into three types sounds cool, the paper doesn't dig deep enough to tell us why we should care. Sure, they show that code gets memorized more than text across all categories - but why? And what does this mean for how these models actually work? How different types of memorization contribute to model capabilities. These are the kind of insights that would make the taxonomy actually useful, but they're missing.\n\nIn addition, the experimental setup is not convincing. For example, the experiments are conducted solely on Pythia models without validation of other popular models. And some of the key choices seem pretty arbitrary like picking $k=32$ for their memorization tests or saying \"more than 5 duplicates\" counts as recitation. Why those numbers? What happens if you change them?\n\nOverall, I think the paper lacks insights and the experiments are not very solid." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1) Can you provide empirical justification for this specific cutoff? How sensitive are your results to this choice?\n2) Could you include statistical significance tests for the reported trends across model sizes?" 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper exhibits several notable strengths that demonstrate its potential value to the field. The proposed taxonomy of memorization provides an intuitive and practical framework for understanding different types of memorization in LLMs. The extensive experimental validation across model scales and training time points offers valuable insights into how memorization behavior evolves. The authors' approach to validating their taxonomy through predictive modeling and dependency analysis shows methodological rigor and provides empirical support for their theoretical framework." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This comprehensive paper presents a novel taxonomic analysis of memorization in LLMs, breaking it down into three distinct categories: recitation (highly duplicated sequences), reconstruction (inherently predictable sequences), and recollection (neither duplicated nor predictable). Through extensive experimentation with the Pythia models ranging from 70M to 12B parameters, the authors demonstrate that different types of memorization exhibit distinct patterns and dependencies on factors like sequence duplication, model size, and training time. They validate their taxonomy by showing its effectiveness in predicting memorization likelihood and reveal that recollection grows disproportionately with model size and training time. The work provides valuable insights into how different factors influence memorization depending on the taxonomic category." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) The template detection approach appears oversimplified. For instance, only basic patterns of \"repeating\" and \"incrementing\" sequences are considered, potentially missing more complex templates. The duplication counting relies on exact matches without accounting for semantic similarity or near-duplicates (e.g. slightly modified code or text passages).\n2) The paper insufficiently compares its taxonomy against existing memorization frameworks. For example, the relationship between these categories and counterfactual memorization, which is mentioned but not analyzed, deserves exploration. The advantages of this taxonomy over other approaches to studying memorization are not quantitatively demonstrated.\n3) The exact procedure for computing KL divergence in Fig 3 is unclear, and the methodology for computing perplexity scores used throughout the analysis lacks essential details. The robustness of results to different tokenization choices is not evaluated." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. 
How would the predictive model perform without continuation perplexity as a feature? This would help assess how much signal the other identified factors provide to predict memorization.\n2. How novel is the introduction of statistically validated taxonomy? Do similar case studies exist in other fields? Beyond the reference to the Simpson paradox, the paper doesn't include references in this area.\n3. Are corpus statistics computed on the prompt, continuation or the full sequence? \n4. What are the characteristics of sequences with high duplicate counts (>100) that don't get memorized? Understanding these cases might provide insight into factors that inhibit memorization despite repeated exposure.\n5. How sensitive are the semantic similarity measurements to the length mismatch between the 64 (or 32)-token samples and the 2049-token sequences they're compared against? \n6. (Less important) Could the sudden increase in reconstruction at 86% of training be related to the \"induction bump\" phenomenon described in the mechanistic interpretability literature? Again, see \"In-context Learning and Induction Heads\", by Olsson et al. for the introduction of the concept.\n\n\nMinor points:\n\nThe Figure 6 would benefit from being vertically compressed without deforming the text.\n\nLikely typo lines 140-142: \"we generate document embeddings for each full sequence using SBERT and count the number of sequences with cosine similarity ≤ 0.8. These sequences are semantically similar but may not be exact token-level duplicates.\" -> I guess it should be \"cosine similarity ≥ 0.8\" instead." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- **The paper provides a methodologically sound approach to developing and validating taxonomies in ML research (and beyond).** By grounding qualitative distinctions in statistical analysis (e.g., following the example of Simpson's paradox), it offers a template for studying complex phenomena in ML beyond memorization that could be of interest for the ICLR community. \n- **The taxonomy enables discoveries about memorization dynamics.** For instance, the finding that duplicate count beyond 5 barely affects recitation likelihood challenges simple assumptions about exposure and memorization. The categorical distinctions also help align research directions with specific applications (e.g., privacy vs. copyright concerns). \n- **Analysis of the semantic properties of the sequence.** This type of statistics provides valuable insights into how models can perfectly reproduce sequences through pattern completion rather than pure memorization for the reconstruction category. This distinction, while simple in hindsight, is important for understanding the relationship between memorization and generalization, and the limit of the k-extractability definition of memorization." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a taxonomy of memorization in language models, breaking it down into three categories: recitation of highly duplicated sequences, reconstruction of inherently predictable patterns, and recollection of rare sequences. The authors validate their taxonomy through statistical analysis and predictive modeling, showing that different factors influence memorization differently across categories. 
\n\nThey analyze how memorization patterns evolve with model scale and training time, finding that recollection grows disproportionately with model size. The paper illustrates the practical value of this taxonomy by showing how different categories matter for different applications (privacy, copyright, scientific understanding) and by achieving better predictive performance compared to both a baseline without taxonomy and an optimally partitioned model." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- **The predictive model's performance is relatively poor** (precision ~0.5, PR AUC ~0.5), despite including continuation perplexity as an input feature. This raises questions about the practical utility of the identified factors in predicting memorization. The heavy reliance on continuation perplexity for every category (see Figure 5.), which is closely related to the definition of k-extractability, makes it difficult to assess the independent predictive power of other factors. \n- **No clear progress in understanding the memorisation of rare sequence.** While the paper identifies recollection of rare sequences as an important phenomenon, particularly as models scale, it provides limited insight into the underlying mechanisms. This gap is particularly notable given the paper's emphasis on understanding different types of memorization.\n- **The presentation lacks clarity at times.** \n\t- When introducing the taxonomy, early concrete examples of each category would significantly improve understanding. \n\t- The paper should also better highlight the distinction between intuitive notions of memorization and the technical definition of k-extractability used in the study. This could help the reader understand why the reconstruction phenomenon (where sequence outside of the training set could be predicted perfectly) fall in the scope of the study of memorization. \n\t- The study could benefit of including reference to a broader set of references such as the study of mechanistic interpretability and training providing more insights on how and when models become able to predict simple sequences. See for instance \"In-context Learning and Induction Heads\", by Olsson et al.\n- **Methodological limitation in the computation of the corpus statistics.** \n\t- The corpus statistics are not broken down into prompt/continuation/full sequence. This could enable the isolation of sequences with a frequent prompt but infrequent continuation, or the opposite for instance. The paper doesn't clearly state which one of the three is used for the corpus statistics.\n\t- If I understood correctly, the semantic similarity measurements are made between sequences of length 64 (or 32) tokens (the memorized/non-memorized samples), and the 2049-token sequences from the Pile. This length mismatch could introduce heavy distortion as even if the small sequence is included in the large sequence, it is not clear that the cosine similarity of their embedding would be similar." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We break memorization down into a taxonomy: recitation of highly duplicated sequences, reconstruction of inherently predictable sequences, and recollection of sequences that are neither." 
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024recite,\ntitle={Recite, Reconstruct, Recollect: Memorization in {LM}s as a Multifaceted Phenomenon},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3E8YNv1HjU},\nnote={under review}\n}" }, "abstract": { "value": "Memorization in language models is typically treated as a homogenous phenomenon, neglecting the specifics of the memorized data. We instead model memorization as the effect of a set of complex factors that describe each sample and relate it to the model and corpus. To build intuition around these factors, we break memorization down into a taxonomy: recitation of highly duplicated sequences, reconstruction of inherently predictable sequences, and recollection of sequences that are neither. We demonstrate the usefulness of our taxonomy by using it to construct a predictive model for memorization. By analyzing dependencies and inspecting the weights of the predictive model, we find that different factors have different influences on the likelihood of memorization depending on the taxonomic category." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "memorization", "ontologies", "language modelling" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/e6f4389afbf30374da98ea2d2c7a61b0533d144e.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Recite, Reconstruct, Recollect: Memorization in LMs as a Multifaceted Phenomenon" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3ENBquM4b4
Plasticity from Structured Sparsity: Mastering Continual Reinforcement Learning through Fine-grained Network Allocation and Dormant Neuron Exploration
main
Active
Continual reinforcement learning;Policy transfer
reinforcement learning
5;5;5;6
3;4;4;3
3;2;2;3
2;2;2;3
3;2;2;2
5.25
3.5
2.5
2.25
2.25
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "**Regarding Weakness 1**:\n\nQ1.1. Could the authors explicitly compare their task-level sparse prompting approach to that in [1], highlighting key methodological differences, and discuss any potential advantages of their approach over the method in [1]? \n\nQ1.2. Could the authors explain their motivation for sampling the task dictionary from N(0,1) rather than using dictionary learning?\n\nQ1.3. Could the authors clarify how their use of dormant neuron resetting differs from or improves upon the approach in [2], and explain why they consider it a key contribution despite its prior use in continual RL?\n\n**Regarding Weakness 2**:\n\nQ2.1. Could the authors provide clear explanation of the advantages of their \"sensitivity dormant scores\" over previous dormant score definitions, and provide an ablation study comparing their \"sensitivity dormant scores\" to other dormant score metrics?\n\nQ2.2. Could the authors provide theoretical or empirical justification for their key model choices?\n\n**Regarding Weakness 3.1**:\n\nQ3.1. Could the authors explain the discrepancy between their reproduced CoTASP results and those reported in the original paper?\n\nQ3.2. Could the authors provide generation performance results for SSDE for a fair comparison?\n\nQ3.3. Could the authors discuss how the performance gap affects their conclusions about SSDE's effectiveness compared to CoTASP?\n\n**Regarding Weakness 3.2**:\n\nQ3.4. Could the author explain why their method performs inferior to [1] when dormant neuron resetting is not used, and provide additional analysis or experiments to clarify whether their co-allocation strategy offers advantages over the method in [1]?\n\nQ3.5. Could the authors discuss potential reasons for the performance difference when dormant neuron resetting is not used and its implications for their method's effectiveness?\n\n**Regarding Weakness 4**:\n\nQ4.1. Could the authors provide a sensitivity analysis or ablation study for key hyperparameters to demonstrate the method's robustness?\n\nQ4.2. Could the authors provide a discussion of strategies for hyperparameter selection in practical applications?\n\nQ4.3. Could the authors provide a comparison of the number and sensitivity of hyperparameters in SSDE to those in baseline methods like CoTASP?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The authors proposed to separate sub-network sparse prompting into global and task-specific levels, which seems effective in their ablation study. \n\n2. The introduction of parameter-level masking and dormant neuron resetting techniques is beneficial for mitigating plasticity loss and keeping the learning capability of networks. \n\n3. 
Combining the proposed techniques, the overall framework achieves higher computation efficiency." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a new structure-based method for the plasticity-stability tradeoff in continual RL. Specifically, the authors propose (1) a sub-network co-allocation method which enhances meta sparse prompting by task-specific sparse prompting, (2) a fine-grained masking method which only freezes the exact parameters trained in previous tasks, and (3) a periodic resetting strategy for dormant neurons. However, this work is largely built upon previous works, which limits its technical contributions. And the reported experimental results are somewhat misleading. Thus, I lean towards rejection at the current stage." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. This paper largely follows the technical parts of [1][2], which limits its own contribution. In detail,\n\n1.1. The authors claim a crucial contribution of introducing task-level sparse prompting, which has also been proposed in [1], where the task dictionary is obtained through a dictionary learning algorithm using previous tasks' optimized prompts and their embeddings. Here the authors sampled the task dictionary from N(0, 1), which shares the same distribution as the global dictionary. I don't see the motivation for this choice. Could the authors elaborate more on this part and also detail the difference between [1] and their method?\n\n1.2. In 4.2, since periodic dormant neuron resetting is well established in [2] for continual RL, it is inappropriate to claim this as a key contribution of their framework. In addition, the use of \"exploration\" is arguably misleading, as it has various meanings in different contexts. I would suggest the authors use the specific term \"structural exploration\" throughout the paper for better clarification. \n\n2. There are many model choices made without sufficient justification. In addition to 1.1, the proposed \"sensitivity dormant scores\" also lack clear motivation. What is the advantage of this metric over previously defined dormant scores? An ablation study is necessary for justifying such choices. \n\n3. The experimental results seem misleading.\n\n3.1. Throughout the experiments, their reproduced results of CoTASP have a huge gap with the ones reported in CoTASP's paper (e.g. for P: 0.73 v.s. 0.92 in CW10 and 0.74 v.s. 0.88 in CW20), which are essentially no different from SSDE (CW10) or even better (CW20). And the generation performance of SSDE is not reported for a fair comparison with previous methods. \n\n3.2. In addition, the results are quite misleading given the reported P in CoTASP. In Table 4, the average success of SSDE w/o Dormant is 0.85, while that of CoTASP, which also doesn't use dormant neuron resetting, achieves 0.92. Does this indicate that the proposed co-allocation strategy is inferior to the allocation method in [1], which also uses sparse prompting? \n\n4. The overall framework contains a lot of hyperparameters, such as the sparsity controlling parameters in Equation (3-4), the trade-off parameter in Equation (6), and the dormant threshold in Definition 4.1, which makes the framework less practical. In addition, no ablation results are provided to show the robustness of the system to these hyperparameters. \n\n[1] Yang, Yijun, et al. \"Continual task allocation in meta-policy network via sparse prompting.\" International Conference on Machine Learning. 
PMLR, 2023.\n\n[2] Sokar, Ghada, et al. \"The dormant neuron phenomenon in deep reinforcement learning.\" International Conference on Machine Learning. PMLR, 2023." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. If the differences between tasks are substantial, could using fixed forward-transfer parameters introduce issues that reduce flexibility?\n2. Can the method proposed by the authors continue adapting to additional tasks, and can its task completion performance still outperform other methods?\n3. For additional issues, please refer to the weaknesses section." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This paper addresses a critical issue in continual reinforcement learning: balancing plasticity and stability to mitigate catastrophic forgetting. The proposed method offers greater computational efficiency than existing approaches. Experimental results are promising, demonstrating that the proposed method achieves a higher success rate and outperforms other baseline methods on the CW10-v1 Continual World benchmark." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a structure-based method, SSDE, to enhance the plasticity-stability trade-off in continual reinforcement learning. It introduces a fine-grained allocation strategy that decomposes network parameters into fixed forward-transfer and task-specific trainable components. Furthermore, to improve the model’s exploration capability, this paper presents an exploration technique based on sensitivity-guided dormant neurons. Experiments conducted on the Continual World benchmark demonstrate that the proposed method achieves a superior success rate and outperforms current state-of-the-art methods in the CW10 task." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper can be improved in terms of writing/presentation.\n2. From Table 3, it is evident that the authors’ proposed method generally performs worse than the ClonEx-SAC method. Therefore, will the ClonEx-SAC method continue to outperform the authors’ method as the number of sequential tasks increases? Additionally, why doesn’t the forward transfer ability, or generalization ability, of the authors’ method improve as the number of tasks increases?\n3. The proposed sensitivity-guided dormant neurons offer limited novelty.\n4. The authors do not include comparisons with the latest regularization-based methods in continual reinforcement learning, and the multi-task methods referenced are also outdated." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "(1)Can we have more continual RL tasks? In the existing version, CW tasks may be not very convincing, especially CW20. Maybe you can refer to [a] for more RL tasks to evaluate continual RL methods.\n[a] Online Continual Learning for Interactive Instruction Following Agents, 2024\n(2)\\beta in Eq.6 is very important since it balance the stability and plasticity in CL. Could you show its sensitivity or how do you decide the optimal value." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "(1)SSDE not only achieves SOTA stability standards but also achieves competitive plasticity even when compared to strong behavior cloning baselines that benefit from data replay.\n(2)Experimental results demonstrate that SSDE outperforms current state-of\u0002the-art methods and achieves a superior success rate of 95% on the CW10-v1." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work introduces SSDE, a novel structure-based continual RL method. SSDE formulates an efficient co-allocation algorithm that enhances sub-network allocation by increasing the capacity for trainable parameters while leveraging frozen parameters for effective forward transfer from previous policies. SSDE introduces a trade-off parameter to balance these two groups of parameters in its fine-grained inference process." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "As shown in Table 3, SSDE takes no obvious advantages in F and FT metrics. These two metrics usually represent backward and forward transfer. So why Average Performance (P) is so good while F and FT not? I am a little confused. Also, ClonEx-SAC seems to perform comparably with SSDE on CW 20, although it replays data." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* Could the authors provide an intuitive explanation for the use of sparse coding in co-allocation? I agree with the general idea of enhancing plasticity with forward-transfer parameters [1], but I do not fully understand why they can be selected using sparse coding.\n\n---\n\n[1] Lin, et al. 
TRGP: Trust region gradient projection for continual learning. ICLR 2022." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The paper is generally well written and very informative.\n* The technical contributions (fine-grained subnetwork allocation and dormant neuron re-activation) seem solid, both effectively addressing the stability-plasticity dilemma in continual reinforcement learning.\n* The experimental results on the Continual World benchmark show great improvements over the existing baselines." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a structure-based continual reinforcement learning algorithm. To address the stability-plasticity dilemma in previous subnetwork allocation approaches, the authors propose a fine-grained allocation strategy with two key designs: (1) The parameter space is decomposed into forward-transfer and task-specific components, which are co-allocated by sparse coding to enhance plasticity. (2) The dormant neurons are periodically reactivated to improve exploration and rapid adaptation in new tasks. The proposed method is validated on the Continual World benchmark and shows significant improvement in performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The second contribution (dormant neuron re-activation) could be validated more carefully. There are already several works dealing with dormant neurons (e.g. [1, 2]), so a comparison with the existing literature in terms of method differences and experimental results would help to justify the contribution of this paper. Also, since the authors propose a new criterion for dormant neurons, it should be investigated how it overlaps with the original definition in [1]; is it possible that it induces false positives and thus destabilizes the learning process? \n* The proposed method utilizes task embeddings from a pre-trained BERT, which limits its applicability in broader scenarios involving implicit knowledge that cannot be verbalized. For example, while BERT embeddings work fine in the Continual World benchmark involving manipulation tasks, they may be difficult to transfer to the Brax scenarios used in [3, 4] involving locomotion tasks. It would be interesting to see further experiments or discussions on this issue.\n* Regarding efficiency, the authors only discuss their allocation time, but there is no mention of total training time and model size. These are more representative metrics for evaluating the efficiency of continual reinforcement learning, and should be reported in detail in the paper. Also, it would be better if the authors could include an intuitive figure of the performance-size or performance-training time tradeoff, perhaps like Figure 1 in [3] and Figure 4 in [4].\n* There is a lack of sensitivity analysis for several hyperparameters, including $\\beta$ in Equation (6), $\\Delta$ in Equation (8) and $\\tau$ for thresholding. The paper determined these hyperparameters by grid search, but I am curious if their choice is critical to the final performance and difficult to choose in practice. 
The authors could present a sensitivity analysis of these hyperparameters to address my concern, and additional studies of their generalizability across different types of tasks would be appreciated.\n\n---\n\n[1] Sokar, et al. The dormant neuron phenomenon in deep reinforcement learning. ICML 2023.\n\n[2] Dohare, et al. Loss of plasticity in deep continual learning. Nature 2024.\n\n[3] Gaya, et al. Building a subspace of policies for scalable continual learning. ICLR 2023.\n\n[4] Sun, et al. Rewiring neurons in non-stationary environments. NeurIPS 2023." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "SSDE improves Continual Reinforcement Learning by balancing plasticity and stability to prevent forgetting." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024plasticity,\ntitle={Plasticity from Structured Sparsity: Mastering Continual Reinforcement Learning through Fine-grained Network Allocation and Dormant Neuron Exploration},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3ENBquM4b4},\nnote={under review}\n}" }, "abstract": { "value": "Continual reinforcement learning faces a central challenge in striking a balance between plasticity and stability to mitigate catastrophic forgetting. In this paper, we introduce SSDE, a novel structure-based method that aims to improve plasticity through a fine-grained allocation strategy with Structured Sparsity and Dormant-guided Exploration. Specifically, SSDE decomposes the parameter space for each task into forward-transfer (frozen) parameters and task-specific (trainable) parameters. Crucially, these parameters are allocated by an efficient co-allocation scheme under sparse coding, ensuring sufficient trainable capacity for new tasks while promoting efficient forward transfer through frozen parameters. Furthermore, structure-based methods often suffer from rigidity due to the accumulation of non-trainable parameters, hindering exploration. To overcome this, we propose a novel exploration technique based on sensitivity-guided dormant neurons, which systematically identifies and resets insensitive parameters. Our comprehensive experiments demonstrate that SSDE outperforms current state-of-the-art methods and achieves a superior success rate of $95\\%$% on CW10 Continual World benchmark." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Continual reinforcement learning", "Policy transfer" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/6ec5fc4c7085bf4df09cdb326da3b80aff5f04a7.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. 
If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Plasticity from Structured Sparsity: Mastering Continual Reinforcement Learning through Fine-grained Network Allocation and Dormant Neuron Exploration" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3EeyQNgKTP
Build Roadmap for Automated Feature Transformation: A Graph-based Reinforcement Learning Approach
main
Active
Automated Feature Transformation;Tabular Data;Multi-Agent Reinforcement Learning
other topics in machine learning (i.e., none of the above)
3;5;5
2;2;2
2;3;3
3;3;2
1;3;3
4.333333
2
2.666667
2.666667
2.333333
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Could the authors elaborate on how they determined the weights for performance and complexity in the reward function? More detail on this could clarify the balance between the two objectives.\n\n2。 How does TCTO perform on high-dimensional datasets with over 10,000 features? Is the pruning strategy sufficient to maintain stability without compromising feature diversity?\n\n3. Were there any specific scenarios where TCTO’s backtracking mechanism was particularly beneficial in terms of model performance or feature diversity?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. While mostly clear, certain sections (e.g., cascading agent decision process) could benefit from additional details.\n\n2. The framework is well-supported by experimental evidence showing its adaptability across different datasets and improvement in downstream model performance.\n\n3. TCTO introduces a novel approach to automated feature engineering by employing a transformation-centric methodology with a graph-based roadmap, overcoming limitations of existing feature transformation methods.\n\n4. The approach’s ability to backtrack and optimize feature transformations dynamically makes it highly applicable in real-world ML tasks where feature diversity and stability are crucial." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces an automated feature transformation framework designed to enhance downstream machine learning model performance. The TCTO framework leverages a reinforcement learning-based graph structure to maintain a roadmap of feature transformations, enabling efficient exploration and backtracking of transformation pathways. TCTO uses a multi-agent reinforcement learning approach, clustering and encoding transformation states to strategically apply feature transformations. Experiments on multiple datasets demonstrate TCTO's performance over existing methods by improving robustness and flexibility in feature generation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While effective on a range of datasets, it is unclear how well TCTO scales with extremely high-dimensional data or very large datasets, as the pruning strategy may require fine-tuning in these cases.\n\n2. The cascading decision-making process is intricate, and further simplification or additional visuals might aid understanding.\n\n3. The reward structure combines performance and complexity, but further discussion on how these metrics are weighted could improve transparency and replicability of the model’s efficacy." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1.How does the computational complexity of TCTO scale with larger datasets, and are there any strategies to mitigate potential performance bottlenecks?\n2.Are there scenarios or specific types of datasets where TCTO’s performance may be limited, and if so, what adjustments might be necessary to enhance its adaptability?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This paper has several notable strengths. Firstly, the authors present a well-motivated framework that addresses clear gaps in current automated feature transformation methods, such as the need for effective historical data utilization and robust backtracking. The proposed TCTO framework is innovative in its use of a graph-based roadmap and cascading multi-agent reinforcement learning, which enhance the flexibility and adaptability of the transformation process. Additionally, the authors provide a comprehensive experimental evaluation across diverse datasets, which convincingly demonstrates TCTO’s superior performance compared to traditional methods. This solid empirical foundation supports the framework's potential for broad applicability in feature engineering for machine learning tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors present TCTO, a graph-based reinforcement learning framework designed for automated feature transformation. The approach addresses limitations in current methods, such as the lack of historical insight utilization and insufficient flexibility in transformation exploration. By constructing a transformation roadmap with nodes representing feature states, TCTO leverages a cascading multi-agent system to dynamically select transformations, reuse effective paths, and prune inefficient ones. The experimental results demonstrate that TCTO outperforms existing methods in generating high-quality features, suggesting its potential to enhance feature engineering in machine learning tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While this paper offers a promising framework, it has some weaknesses. Firstly, the explanation of the cascading multi-agent system and its decision-making processes could benefit from more clarity and detail, as the current description may be challenging for readers to fully grasp without additional context. Additionally, the computational complexity of TCTO is not thoroughly analyzed, especially regarding scalability to larger datasets, which may impact its practical applicability. 
Finally, while the experimental results are extensive, the paper could further strengthen its claims by providing more insight into specific scenarios or datasets where TCTO may struggle, thereby clarifying the framework’s limitations and potential areas for improvement." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "* How were the small uncertainties in Table 1 achieved? How often were the experiments repeated?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* It can be seen (among other things from the large number of specific illustrations) that a lot of effort was put into preparing the paper" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper deals with the automated generation of features. The generation process consists of several steps, which are represented as a graph. The graphs are to be optimized using multi-agent reinforcement learning." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* I find the text very badly written. Examples follow. The novelty and benefits of the method are hard for me to understand.\n* It seems to me that there is too much material for a conference paper, the number of pages is simply not enough to present it in a convincing way.\n\nDetails, examples and further comments:\n* I don't think “roadmap” is a suitable term, “schedule” or \"sequence\" would probably be better.\n* The title sounds strange. Wouldn't \"Optimization of transformation sequences for automated feature generation“ be better?\n* The abstract uses terms that are incomprehensible:\\\nmathematical feature-feature crossing\\\nthe roadmap of feature transformation\n* „Feature transformation task aims to generate high-value features and improve the performance of downstream machine learning tasks using the mathematical feature-feature crossing” needs to be reformulated.\n* \"Classic machine learning is highly dependent on the structure of the model, the activation function\" cannot be said in this way, it seems to refer exclusively to neural networks and not to classical machine learning in general.\n* A reference should be given for \"a cascading multi-agent reinforcement learning (MARL) algorithm\", because it is not generally known what “cascading multi-agent reinforcement learning” is.\n* “we present an innovative framework” -> “we present a novel framework”\n* In the loss function, Equation 8, the square is probably missing. \n* \"In this study, we introduce TCTO, an automated feature transformation framework. Our method emphasizes a transformation-centric approach, in which a transformation roadmap is utilized to systematically track and manage feature modifications.\" should be reworded. What is the information content? 
What should be expressed?\n* I think that the Abstract and Conclusion need to be completely rewritten." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024build,\ntitle={Build Roadmap for Automated Feature Transformation: A Graph-based Reinforcement Learning Approach},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3EeyQNgKTP},\nnote={under review}\n}" }, "abstract": { "value": "Feature transformation task aims to generate high-value features and improve the performance of downstream machine learning tasks using the mathematical feature-feature crossing. \nCurrent frameworks rely on iterative sequence generation with exploration optimization through performance feedback from downstream tasks.\nHowever, these approaches fail to effectively utilize historical decision-making experiences and overlook potential relationships among generated features, thus limiting the flexibility of the whole process.\nMoreover, the decision-making process lacks dynamic backtracking capabilities for each feature, leading to insufficient adaptability when encountering inefficient pathways, adversely affecting overall robustness and exploration stability. \nTo overcome these challenges, we present an innovative framework that employs a feature-state transformation graph to maintain the roadmap of feature transformation, with each node symbolizing a transformation state. \nDuring exploration, three cascading agents sequentially select nodes and mathematical operations to generate new transformation states.\nThis strategy leverages the graph structure's inherent properties, allowing for the preservation and reuse of sight-seen and valuable transformations. \nIt also enables back-tracking capabilities through graph pruning techniques, which can rectify inefficient transformation paths.\nTo validate the efficacy and flexibility of our approach, we conducted comprehensive experiments and detailed case studies, demonstrating superior performance in diverse datasets." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Automated Feature Transformation", "Tabular Data", "Multi-Agent Reinforcement Learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/741a0ae34f509e38c93795f4fd3954341ede5866.pdf" }, "presentation": null, "primary_area": { "value": "other topics in machine learning (i.e., none of the above)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. 
To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/22d2afa2f51dd27c3984077c662b697447aafce0.zip" }, "title": { "value": "Build Roadmap for Automated Feature Transformation: A Graph-based Reinforcement Learning Approach" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3Fgylj4uqL
Interpretable Causal Representation Learning for Biological Data in the Pathway Space
main
Active
Causal Representation Learning;Intepretability;VAE;Genomic Perturbations;Health
applications to physical sciences (physics, chemistry, biology, etc.)
5;5;6;6
4;4;3;3
3;2;2;3
3;2;3;3
3;3;3;4
5.5
3.5
2.5
2.75
3.25
-1
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- How sensitive is the model to the quality and completeness of the pathway knowledge used? Have you tested with different pathway databases or subsets of pathways?\n- How does the computational complexity scale with the number of biological processes? Is there a practical limit to how many processes can be incorporated?\n- Have you explored whether the causal relationships discovered by the model align with known biological pathway interactions beyond the examples provided?\n- Can you confirm that all genes involved in the double perturbations were also present in your single-perturbation training data?\n- Has $N$ and $\\tau$ been mixed up in the Hits@N metric?\n- How does table 2 show that “both models tend to assign most interventions to a small number of latent factors”?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Clear technical contribution that bridges causal representation learning with biological interpretability while maintaining theoretical guarantees\n- The paper contributes to causal representation learning for Perturb-seq data by introducing biological interpretability through pathway information, while maintaining the theoretical guarantees of discrepency-VAE.\n- Well written and clear presentation of the method and results.\n- Thorough ablation studies\n- Demonstrates interpretability of latent factors with concrete biological examples." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper addresses the challenge of learning interpretable causal representations for Perturb-seq data (gene expression in cells). The primary contribution is the novel introduction of masking to incorporate biological process (BP) knowledge into an existing method for causal representation learning (discrepency-VAE), which is named SENA-discrepancy-VAE. The masking ensures that latent factors can be interpreted as linear combinations of the activity of BPs. Since this modification is compatible with the discrepancy-VAE, the original model's theoretical guarantees for causal representation learning remain.\n\nThe method and ablated variants are evaluated on a Perturb-seq dataset collected from one particular cell line and is set up to minimize the overlap between the BPs. The results demonstrate that SENA performs similarly to discrepency-VAE in terms of reconstruction yet results in sparser and more interpretable results. Furthermore, by studying the contrast between inferred activity levels on perturbed and control samples the authors show that the latent factors can be associated with BPs and are therefore interpretable." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Experimental validation is limited to one dataset and no baselines other than their own ablations and discrepancy-VAE. The paper would benefit from comparisons to at least one of the other listed related works.\n- No comparison with simpler approaches like post-hoc interpretation of standard discrepancy-VAE latent factors.\n- While the link between latent factors and BPs is investigated, the quality of the discovered causal graph is not.\n- Given that the latent factors group a large number of BPs into a small number of latent factors there should be a deeper investigation of the biological plausibility and practicality of this result beyond the contrasting activations.\n- Readability of several figure texts should be improved." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "What is the intuition behind the $\\lambda$ hyperparameter to tune small influences of a gene on a biological process? Should this be a constant value throughout the mask matrix or would it be better to learn this influence via some type of attention weights?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper is well written with strong motivations behind using CRL techniques for biological applications.\n- The metrics proposed (differential activation, Hits@N) seem to be robust indicators of perturbation effects on BPs and downstream effects. I believe these evaluation metrics are one of the key interesting contributions of this work.\n- The empirical evaluation is exhaustive and illustrates some interesting observations, especially the representational capacity of the VAE-based SENA method compared to the traditional discrepancyVAE.\n- The interpretability analysis of the reparameterization layer is interesting and reveals which genes were affected the most upon perturbations. I do believe that exploring real-world applications of CRL is a very important direction." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work proposes to extend the discrepancyVAE interventional causal representation learning framework to biological processes applications. Specifically, the authors propose to embed prior knowledge about biological processes (BPs) through a framework called SENA-discrepancyVAE, which recovers latent factors that are a linear combination of a set of biological processes (pathways). The main idea presented in this work is to design a more flexible encoder class (SENA-\\delta) specific to mapping biological pathways to latent causal factors for interpretability. 
Empirical results show that the framework improves performance in predicting the effect of unseen perturbations." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Although the application in gene regulatory networks is quite interesting, this work seems to be more of an evaluation study of the discrepancy-VAE framework proposed by Zhang et al. I do not see much of an added contribution beyond the original paper besides highlighting the application.\n- The difference in performance between the SENA variant and the original discrepancyVAE seems to be quite marginal in terms of representation in the double-perturbation scenario. For instance, in Table 2, the KL-divergence for double-perturbation prediction is only marginally better than the original MLP-based discrepancyVAE." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Can you provide examples of BPs in the appendix? \n\nRegarding DAR evaluation: What happens if unaffected pathways have very low activation? Also, more generally, how would you deal with imbalanced pathways, which might lead to measuring large noise levels? \n\nIn Table 2, for SENA λ=0.1, latent dim 105, the variance compared to original MLP and λ=0 is significantly lower (0.000081 vs 0.001087). Just double checking if this is correct.\n\nL237: During filtering you end up with a (biased) set of BPs. How much do you think this can influence interpretability? Is there a risk of removing useful BPs?\n\nSuggestions\nL100: the word faithfully here gets confused with causal faithfulness. Please consider an alternative adverb if possible.\nL105: target instead of targets (remove final s)" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* Clarity: The paper is well written and easy to follow.\n\n* Novel Technical Contribution: The paper successfully extends causal representation learning to incorporate domain knowledge while preserving theoretical guarantees. The SENA-δ encoder architecture is a clever solution to balance interpretability and performance.\n\n* Practical Impact: The work addresses a significant gap in current causal representation learning methods for biological data, where interpretability is crucial for scientific insights. The ability to map latent factors to biological processes makes the model more useful for domain experts." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents SENA-discrepancy-VAE, an extension of the discrepancy-VAE framework that incorporates biological pathway knowledge to produce interpretable causal latent factors. 
The authors modify the encoder architecture to map gene expression through biological processes (BPs) while maintaining the theoretical guarantees of the original model. The approach achieves comparable predictive performance to the original discrepancy-VAE while providing biologically meaningful latent representations and interpretability." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* Limited Biological Validation: While the authors show statistical associations between perturbations and biological processes, there could be more validation using external biological knowledge or experimental validation of the discovered causal relationships.\n\n* Hyperparameter Sensitivity: The model introduces an additional hyperparameter λ that significantly impacts performance. While ablation studies are provided, more guidance on selecting this parameter would be valuable (this is important given that there's some large impact on the performance of the method)\n\n* Restricted Evaluation: The empirical evaluation is limited to a single dataset (Norman et al., 2019). Additional validation on different types of biological data would strengthen the claims of generalizability." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1) Could the authors clarify whether multi-label or probabilistic pathway activity labels were considered as an alternative to binary labels? Binary labels may oversimplify gene activity levels, especially when certain pathways exhibit gradations rather than discrete on/off states.\n\n2) How would the model perform if pathway relevance were dynamically adjusted based on specific task contexts or experimental conditions? Pathway importance often varies, and adapting pathway relevance could improve model flexibility. A task-specific analysis to explore whether dynamically adjusting pathway selection enhances generalizability across datasets and biological contexts could provide insights into the model’s robustness.\n\n3) In the causal graph (Figure 3), how sensitive are the edge weights to the choice of λ and latent dimension? Please provide a sensitivity analysis.\n\n4) The mask matrix M (equation 2) assumes binary gene-pathway relationships. Have you considered using weighted relationships based on pathway databases' confidence scores?\n\n5) How does D_KL evolve during training for different λ values? Training curves would help understand if this is an optimization or regularization effect. Does this phenomenon persist if you randomize the pathway annotations while maintaining the same sparsity structure? This would determine if the benefit comes from biological knowledge or just sparsity.\n\n6) Have the authors empirically validated that the expectation in Equation (10) aligns with the observed experimental outcomes? 
Demonstrating this match would reinforce the theoretical assumptions and provide additional confidence in the model's causal interpretability." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1) The proposed model’s integration of biological pathway data as a prior in the causal representation learning (CRL) framework is interesting and practical, offering a biologically grounded solution to gene expression analysis. Through the direct alignment of the latent factors with biological pathways, SENA-discrepancy-VAE addresses the common limitation in CRL models of producing uninterpretable latent factors. \n\n2) The paper presents thorough experiments across multiple perturbation types, including single and double-gene knockouts, demonstrating the model’s robustness. Its generalization to unseen perturbations is compelling, and the ablation studies, which explore interpretability-reconstruction trade-offs, further validate the model’s design.\n\n3) The authors have done a good job communicating the importance of embedding biological processes into latent spaces, with visuals that illustrate BP-specific activity levels influencing latent factors. The use of causal graphs and the differential activation (DA) metric enhances transparency, allowing readers to trace latent factors back to biological functions. \n\n4) The model’s ability to provide interpretable predictions on cellular responses can aid in experimental design and offer insights into the potential effects of genetic or drug interventions. This approach addresses a pressing need in biomedicine for interpretable causal representation learning (CRL) models that can shed light on the intricate causal relationships underlying gene function and cellular processes." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces SENA-discrepancy-VAE, a novel model in causal representation learning (CRL) designed to make biological data analysis—especially from Perturb-seq experiments—more interpretable. A key innovation of this work is how it integrates biological processes (BPs) as prior knowledge, directly linking the model’s latent factors to known biological pathways. This approach fills an important gap in existing CRL methods, which often struggle with interpretability since they don't directly associate learned representations with actual biological mechanisms, making them less useful for real research applications.\n\nSENA-discrepancy-VAE builds on the standard discrepancy-VAE by introducing a pathway-based masking strategy within a new encoder, SENA-δ. This encoder uses a two-layer masked MLP where the first layer maps gene expression values to BP activity levels, with a tunable parameter that adjusts the influence of genes outside predefined pathways, giving the model flexibility in gene-pathway associations. The second layer models latent factors as combinations of these BP activities, which is a more realistic approach since biological interventions often impact multiple pathways. This setup stays true to the CRL assumption that each intervention targets a single latent factor but does so in a way that aligns with biological realities.\n\nThe authors evaluate SENA-discrepancy-VAE on a Perturb-seq dataset of leukemia lymphoblast cells. 
They show that the model performs as well as the original discrepancy-VAE on unseen perturbation combinations while providing greater interpretability by identifying specific BPs associated with each latent factor. This interpretability is validated through pathway-specific analysis, demonstrating the model’s ability to reveal biologically meaningful patterns in response to genetic interventions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) While the model effectively identifies single-point perturbations, it does not accommodate multi-step perturbations or capture the progression of cellular responses over time. In biological experiments, the cellular response often evolves in phases, with gene activity showing distinct transitions that are essential for understanding the effect of interventions.\n\n2) The model’s validation on a single dataset (K562 cell line data) restricts insights into its generalizability across different cell types or conditions. Testing on additional datasets, such as those from other cell lines or cellular environments, would offer a stronger assessment of robustness and applicability across a wider range of biological data.\n\n3) The model assumes static pathway relevance across all tasks, which may limit its adaptability in varied biological contexts where pathway importance changes with cell type or condition.\n\n4) The paper assumes that each intervention corresponds to a single latent factor, limiting the model's ability to capture complex interactions where multiple latent factors might be influenced by a single intervention. This simplification restricts the model’s interpretability in representing overlapping or interacting biological processes, which are common in gene expression dynamics.\n\n5) The model lacks a detailed exploration of how varying the number of latent dimensions impacts the interpretability and causal mapping of latent factors. Larger or smaller dimensions can influence the granularity of the factors and thus affect the biological insights the model can provide." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024interpretable,\ntitle={Interpretable Causal Representation Learning for Biological Data in the Pathway Space},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3Fgylj4uqL},\nnote={under review}\n}" }, "abstract": { "value": "Predicting the impact of genomic and drug perturbations in cellular function is crucial for understanding gene functions and drug effects, ultimately leading to improved therapies. To this end, Causal Representation Learning (CRL) constitutes one of the most promising approaches, as it aims to identify the latent factors that causally govern biological systems, thus facilitating the prediction of the effect of unseen perturbations. Yet, current CRL methods fail in reconciling their principled latent representations with known biological processes, leading to models that are not interpretable. To address this major issue, in this work we present SENA-discrepancy-VAE, a model based on the recently proposed CRL method discrepancy-VAE, that produces representations where each latent factor can be interpreted as the (linear) combination of the activity of a (learned) set of biological processes. 
To this end, we present an encoder, SENA-$\\delta$, that efficiently computes and maps biological processes' activity levels to the latent causal factors. We show that SENA-discrepancy-VAE achieves predictive performances on unseen combinations of interventions that are comparable with its original, non-interpretable counterpart, while inferring causal latent factors that are biologically meaningful." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Causal Representation Learning", "Interpretability", "VAE", "Genomic Perturbations", "Health" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/d682c5136d996ce63e84ed25d12be6a57104abf8.pdf" }, "presentation": null, "primary_area": { "value": "applications to physical sciences (physics, chemistry, biology, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Interpretable Causal Representation Learning for Biological Data in the Pathway Space" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3GMuudWmMV
Aya in Action: An Investigation of its Abilities in Aspect-Based Sentiment Analysis, Hate Speech Detection, Irony Detection, and Question-Answering
main
Active
Sentiment Analysis;Hate Speech Detection;Irony Detection;Question-Answering;Large Language Models;Few-shot Learning;Portuguese Language.
applications to computer vision, audio, language, and other modalities
1;3;5;6
4;4;4;5
2;1;2;3
1;2;2;3
3;2;3;3
3.75
4.25
2
2
2.75
0.676481
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See Above" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper is well-structured with a clear methodology.\n\n2. It offers some insights into Aya's performance in multilingual contexts and addresses challenges faced by low-resource languages." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper evaluates the performance of the multilingual Aya model across tasks such as Aspect-Based Sentiment Analysis (ABSA), Hate Speech Detection (HS), Irony Detection (ID), and Question-Answering (QA) in Brazilian Portuguese. Through a few-shot learning approach, Aya demonstrates competitive results in QA, surpassing some Portuguese-specific models, though it underperforms in tasks involving nuanced or slang-heavy language like HS. The study highlights Aya's potential in low-resource contexts while indicating the need for further tuning for certain language-specific tasks to match or exceed specialized models​" }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. This paper only conducts an evaluation of the multilingual large language model on a specific language, Brazilian Portuguese, even though Aya supports 101 languages. The tasks are limited to Aspect-Based Sentiment Analysis (ABSA), Hate Speech Detection (HS), Irony Detection (ID), and Question-Answering (QA). Many other NLP tasks could be studied, such as reading comprehension, syntax parsing, named entity recognition, and event extraction. The evaluation scope and language focus are limited, reducing the paper's contribution.\n\n2. Describing the equations for precision and recall in such detail seems unnecessary and only increases the document length without adding value.\n\n3. This paper lacks inspiring conclusions from the experiments. It only presents main results and a confusion matrix, without providing in-depth analysis through fine-grained evaluation or insights into the working principles of the Aya model." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see the above." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- This paper is centered around a very interesting and highly relevant topic.\n\n- Clearly scoped tasks and objectives." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents an evaluation of Aya, a multilingual language model, on four tasks including Aspect-Based Sentiment Analysis, Hate Speech Detection, Irony Detection, and Question-Answering, highlighting its strengths and limitations, particularly when compared to transformer-based models for the Portuguese language." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "There are a few points that must be addressed:\n\nThroughout the text it feels like the authors use aggressive speech, hate speech, and offensive speech interchangeably. For example, in Figure 1 we see offensive vs non-offensive. The authors should pay attention to this aspect: clearly define the specific type of abusive phenomena they are focusing on, and use that terminology consistently throughout their work – please look into the nuances surrounding the overlapping abusive phenomena (the distinction between hate speech, abusive language, and offensive language). See for example the work of Poletto et al.:\n\n*Poletto F, Basile V, Sanguinetti M, Bosco C, Patti V. Resources and benchmark corpora for hate speech detection: a systematic review. Language Resources and Evaluation. 2021 Jun;55:477-523.*\n\nI would have liked for the authors to spend a little bit more space detailing the methodology. The authors should provide the criteria used for selecting the examples, as well as the exact prompts used for all the tasks (not just QA):\n\n- How did the authors ensure that the examples were representative and diverse? \n- What was the exact input for the Aya model? The authors provide the prompt for the QA, but not for the other tasks. Does that mean that for the other, only the examples were used? I am asking this because I am surprised by the fact that for hate speech *‘the generation was more efficient when using the labels as numbers, instead of the actual labels’* (cf. lines 309-311). For example, I just asked Aya if it is familiar with the hate speech definition provided by the OHCHR, and the answer was positive. How would the generation have changed if including this type of information? Did the authors provide any task-specific instructions to the model beyond the few-shot examples?\n\nA more in-depth error analysis would have been interesting to have. The authors could consider a subset of the misclassified data and construct aggregate statistics over modes of failure which would then inform us how prevalent each of the kinds of mistake are. This would be useful for future research, as it would become possible to prioritize on which kind of misclassification to work on next.\nIn regards to the ABSA example provided, I don’t agree that *‘hotel’* has a neutral sentiment – it seems to be conflict (i.e., both positive and negative) or, we could say that the entity hotel has a positive sentiment towards the attributes location and service, but negative towards the attribute that would incorporate size/room description.\n\nWas there any hyperparameter tuning performed for the transformer-based models? 
Interesting results for the Albertina models on the ID task. \n\nSuggestions:\n\n- abbreviations should be presented once, at their first occurrence, and always used afterwards (except for the abstract)\n- I believe the paragraph starting on line 089 is actually a continuation of the previous one and does not require splitting\n- line 124: an -> a\n- line 161: a -> an" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Other than Portuguese, have the authors considered evaluating their model for any other low resource language? That would provide a more comprehensive idea of the performance variance of Aya across languages." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This paper addresses an important problem that aims to mitigate technological inequities towards low resource languages.\nThe methodology used in this paper is reasonable and easy to understand. In addition, the paper also makes a thorough inquiry into related research before carrying out their work.\nThe experiment section provides an adequate amount of evaluation to properly assess the performance of the model for a number of language tasks in a low resource language like Portuguese." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This study aims to assess the performance of Aya, a multilingual generative model trained on a wide range of low resource languages and a variety of downstream tasks like Aspect-Based Sentiment Analysis, Hate Speech Detection, Irony Detection, and Question-Answering. The objective is to evaluate Aya's effectiveness in these tasks but only on the pre-trained model without any finetuning. This would reveal its potential to improve the quality and accuracy of outputs in various natural language understanding tasks. Instead, this work employs a few-shot methodology to evaluate the model's effectiveness as this approach is particularly well suited in the absence of extensive labelled data in low resource languages. Results indicate that while Aya performs well in certain tasks like Question-Answering for languages like Portuguese, for other tasks like Hate Speech Detection the performance was significantly underwhelming. These results suggest that multilingual models like Aya can perform competitively in some contexts but may require further tuning to match the effectiveness of models specifically trained for Portuguese." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "This work is low in novelty despite addressing an important problem. Training a generative model for a low resource language like Portuguese is certainly important work.
While the authors employ few-shot learning as a logical workaround for the issue of low training data, their evaluation reveals the relative limitation of this approach after a point. The authors could further look into some technical innovations in this regard to improve performance of the Aya model in Portuguese for tasks like Hate Speech or Irony Detection." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "**Comments:**\n1. How do you ensure the examples of few-shot learning are representative and diverse? What is the measure of representativeness? For example, do you consider the classes or the related input text? \n2. L259-261: Do the chosen questions, which all begin with \"What\", “Where”, “Who” and “When”, represent the dataset in a general way?\n3. L268-L269, \"we selectively remove instances from the dataset and include them alongside each test example during inference.\" -- Why do you remove instances from the dataset? It is not discussed which splits the removed instances belong to.\n4. In Table 1, few-shot examples are 11 (SA), 10 (HS), 20 (ID), and 4 (QA). Do you use the same examples for all inference data?\n5. \"In total, they mention nine different aspects, including four examples with negative polarity, four with positive polarity, and three that are neutral.\" -- Who mentioned this? I believe there should be a citation.\n6. There are nine different aspects in the dataset, does every aspect represent only one sentiment? If not, how does the example represent the other classes that are not selected? \n7. Equations 1-6 are well known to the community; this information is redundant.\n8. L421-422: why neutral is harder than positive and negative is not properly discussed.\n9. Comparing the results of two classes (Positive and Negative) with three classes (Positive, neutral, and negative) is not a good idea, given that the model can more easily differentiate two classes whereas multi-class prediction is harder.\n10. In Figure 2, the columns of the confusion matrix do not sum to 100. \n11. The authors stated that the model predicted exactly the same answer as the ground truth. However, it would be interesting to see some examples of those answers.\n12. SQuAD v1 is an old dataset, and there is a possibility that it was included in Aya's training data. It would be interesting to see the performance on some other QA datasets." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "The paper uses state-of-the-art multilingual LLMs (Aya), which are known for their capabilities on low-resource languages. Moreover, the study uses four different tasks (SA, HS, ID, and QA) ranging from classification to text generation/QA."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper assesses Aya’s performance in four tasks: Aspect-Based Sentiment Analysis, Hate Speech Detection, Irony Detection, and question answering in Portuguese language. The authors also compared Aya's performance with other language models. However, it is not understandable which one is from prior studies in Table 2. It is also unclear whether the Sabia-7B model is studied in this study or previous study." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main weakness of this paper is written poorly. The related work and the theoretical background sections are too long. The claim about the examples for few-shot learning could be partially correct but not properly correct. The details of prompting are missing in the paper. The performances are not well discussed in the paper.\nI believe the paper will be in good shape if the content is trimmed to a short paper rather than a long full paper." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024aya,\ntitle={Aya in Action: An Investigation of its Abilities in Aspect-Based Sentiment Analysis, Hate Speech Detection, Irony Detection, and Question-Answering},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3GMuudWmMV},\nnote={under review}\n}" }, "abstract": { "value": "While resource-rich languages such as English and Mandarin drive considerable advancements, low-resource languages face challenges due to the scarcity of substantial digital and annotated linguistic resources. Within this context, \nin 2024, Aya was introduced, a multilingual generative language model supporting 101 languages, over half of which are lower-resourced. This study aims to assess Aya's performance in tasks such as Aspect-Based Sentiment Analysis, Hate Speech Detection, Irony Detection, and Question-Answering, using a few-shot methodology in Brazilian Portuguese. The objective is to evaluate Aya's effectiveness in these tasks without fine-tuning the pre-trained model, thereby exploring its potential to improve the quality and accuracy of outputs in various natural language understanding tasks.\nResults indicate that while Aya performs well in certain tasks like Question-Answering, where it surpassed Portuguese-specific models with an Exact Match score of 58.79%, it struggles in others. For the Hate Speech Detection task, Aya's F1-score of 0.64 was significantly lower than the 0.94 achieved by the Sabiá-7B model. Additionally, the model's performance on the Aspect-Based Sentiment Analysis task improved considerably when neutral examples were excluded, but its handling of complex slang and context-dependent features in other tasks remained challenging. These results suggest that multilingual models like Aya can perform competitively in some contexts but may require further tuning to match the effectiveness of models specifically trained for Portuguese." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Sentiment Analysis", "Hate Speech Detection", "Irony Detection", "Question-Answering", "Large Language Models", "Few-shot Learning", "Portuguese Language." ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/eb37281b432581d69ead4acd8e3f017817d486c5.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Aya in Action: An Investigation of its Abilities in Aspect-Based Sentiment Analysis, Hate Speech Detection, Irony Detection, and Question-Answering" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3GTtZFiajM
Justice or Prejudice? Quantifying Biases in LLM-as-a-Judge
main
Active
LLM;LLM-as-a-Judge;trustworthy LLM;evaluation
alignment, fairness, safety, privacy, and societal considerations
5;5;6;6
4;4;3;4
3;3;2;3
2;2;3;3
3;2;3;4
5.5
3.75
2.75
2.5
3
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- How do you ensure that the generated perturbations effectively introduce the desired bias without altering the correctness of the content? How well do the LLMs understand the instructions for generating biased content? Could there be unintended consequences or biases introduced by the LLMs themselves?\n- Would incorporating a human evaluation benchmark provide additional insights into the accuracy and fairness of LLM judgments?\n- Are there potential trade-offs between mitigating biases and maintaining the performance of LLM judges?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Originality: The authors expand upon existing work by identifying and categorizing 12 distinct types of biases.\n- Quality: The paper presents a thorough evaluation of the identified biases across multiple LLMs, using diverse datasets and specific metrics tailored for judging tasks. This rigorous experimental design ensures the reliability and validity of the findings.\n- Clarity: The examples in Table 1 provide concrete examples of how biases manifest in LLM judgments, making the abstract concepts more tangible and relatable.\n- Significance: The proposed CALM framework offers a valuable tool for stakeholders to assess and mitigate biases, leading to more fair and reliable LLM evaluation methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper explores the potential biases inherent in using Large Language Models (LLMs) as judges in various evaluation tasks, such as scoring and pairwise comparison. The authors propose a novel framework called CALM, which systematically quantifies and analyzes each type of bias by using automated and principle-guided modification. The paper evaluates six popular LLMs using the CALM framework and finds that while some models demonstrate notable fairness in judgment, significant biases persist in certain specific tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- I think this paper is more like a toolkit paper rather than a novel research paper, as they just integrate 12 types of existing biases in LLM-as-a-Judge. If we look at the appendix B, we can find that each of the 12 types can be referenced to another previous paper.\n- The paper primarily relies on automated metrics to assess bias, but human evaluation could provide a valuable additional perspective. Incorporating a human evaluation benchmark would strengthen the validation of the findings." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "The discussion includes prejudice against certain groups such as “homosexual,” “black,” “female,” and “HIV-positive.” HIV-positive.” I would be concerned that there would be some impact on particular groups." }, "flag_for_ethics_review": { "value": [ "Yes, Discrimination / bias / fairness concerns" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. What is the source of the basis for these assessments of Robustness Rate (RR) and Consistency Rate (CR)? Why are human correlations such as Pearson's coefficient not considered in the assessment. \n2. LLM does not take into account some of the popular LLM as Judge models, such as pandaLM, Prometheus, etc. LLM lacks a specific LLM as Judge model. Lack of specific LLM as Judge evaluation. \n3. Is the data randomly selected? For example, GMSK, how to choose 100 pieces of data from tens of thousands of pieces of data? How to prove that these 100 data are representative enough?\n4. Is it user friendly? Is the reproduction cost prohibitive?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. A comprehensive delineation and classification of twelve specific biases that can compromise the dependability and credibility of LLM-as-a-Judge.\n2. The proposal of the CALM framework for assessing biases within LLM-as-a-Judge systems, which enhances the rigor of the assessment process in a person-independent manner.\n3. An in-depth analysis of six widely-used LLMs through the lens of the CALM framework." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This study identifies 12 significant potential biases and introduces a novel automated bias quantification framework called CALM. This framework systematically quantifies and analyzes each type of bias in LLM-as-a-Judge through automated, principle-guided modifications. Empirical findings indicate that there is still room for enhancing the reliability of LLM-as-a-Judge. The paper explores both the explicit and implicit impacts of these biases and offers recommendations for the dependable application of LLM-as-a-Judge." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Lack of Transparency in Assessment Criteria: The source of the basis for the assessments of Robustness Rate (RR) and Consistency Rate (CR) is unclear. \nIncomplete Consideration of Popular Models: The evaluation does not include some well-known LLM as Judge models, such as pandaLM and Prometheus. This omission suggests a lack of thoroughness and may lead to biased or incomplete conclusions.\nQuestionable Data Selection Process: The method for selecting data is not well-defined. For instance, in the case of GMSK, the process of choosing 100 pieces of data from tens of thousands is not explained. 
This raises concerns about the representativeness and reliability of the selected data.\nUser-Friendliness and Reproduction Costs: There are concerns about the user-friendliness of the system and whether the costs associated with reproducing the results are prohibitive. This could limit accessibility and practical application for users." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "073: This is not limited only to \"humanities, social sciences, or general knowledge\" fields, a refined answer could be in any field or task.\n\n281: If i understand correctly, for CR you ask the LLM to generate two responses for the same prompt, and check if the answers are consistent. If so, why not use more than two responses? You can make the consistency measure more robust by comparing the variance of N>>2 generations. \n\n295: Which ACC do you use as a metric?\n\nMetrics paragraph: In my opinion, and even though the English is good and I understand each sentence, this paragraph is not clear enough. I would emphasize in the text which metric is used for each task, specifically at the start of the paragraph, and refer to the column in the table. In addition, the names, abbreviation are not consistent throughout the paper and specifically in the figures. Please see the weaknesses regarding the metrics." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "I believe the paper is strong, well-written, and highly comprehensive. The topic is both timely and important, and the NLP/LLM community would greatly benefit from its publication. In my opinion, this paper should be accepted. The reason I initially rated it a 6 instead of an 8 is to encourage the authors to consider revising the metrics (as discussed in the weaknesses section)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors identify 12 distinct types of biases in LLM-as-a-Judge models, provide automated code for \"checklisting\" these biases in LLM judges, and conduct a comprehensive analysis of how these biases manifest in state-of-the-art LLM judges across various datasets. The key finding is that LLM judges are not robust. The implications of these findings are significant and should be effectively communicated to the community." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**Metrics**: I have a few suggestions regarding the metrics used in this paper and how they are presented in the results. First, for the RR and CR metrics, I recommend making the CR metric more robust by sampling multiple generations when computing the CR for a given instance. 
Additionally, I propose adjusting the RR metric with the CR, as the authors note that LLMs are non-deterministic, and a low RR score might reflect this rather than a genuine lack of robustness. To adjust the score for each individual instance, I would subtract the individual CR from the individual RR. It is important to compute this adjustment at the individual level, as the CR varies between instances. The final score would be the dataset average of RR_i - CR_i. While this metric is more complex and falls within a -1 to 1 scale, it is far more reliable. If you choose not to present the difference between the two, I suggest at least presenting the average RR and average CR for each LLM in the results. The CR is not a constant value; it varies between models and across instances.\n\nRegarding the ACC metrics, I am unclear about which specific metric you are using in the results section. Should we compare the two metrics and examine the variations between them? Additionally, is this score presented as an absolute value? Could you clarify this aspect to ensure an accurate interpretation of the results? And why do you use \"hack\"? Isn't it essentially the CoT ACC? \n\nRegarding the Error Rate (ER) metrics, could you explain the rationale for using these metrics instead of the RR/CR in the paper? Additionally, could we apply the ER metrics to detect other forms of bias? I also find the ER_SE metric unreliable. From your description, it appears that Y_other represents the score assigned by other models to the evaluated model's response. However, I believe Y_other should represent the average score assigned by the explained model to the responses of other LLMs. This would better measure whether the LLM prefers its own responses. Otherwise, you're merely capturing the LLM’s general bias relative to the consensus. For example, one LLM might use scores in the range of 3-7, while another uses 1-6, yet they could still achieve a perfect Spearman's correlation. Moreover, why do you use absolute value? This can be misleading. For example, y_self, y_other = (5, 3) is the same as (1, 3). I believe you can think on a better metric for ER. \n\nIn general, I would recommend adjusting the metrics so that higher scores indicate more bias, unlike the current ones where higher scores represent robustness. Since you frequently use the term \"bias\" throughout the paper, this modification could make the results and the interpretation of the metrics more intuitive and easier to follow. \n\nIf the authors revise the metrics and provide this analysis, or at least clarify what I may be misunderstanding, I would be happy to increase the overall rating from 6 to 8.\n\n**Misinterpretations of the Results:** Specifically, the statement \"Bias is more pronounced in the alignment dataset compared to the fact-related dataset\" cannot be inferred from the results. The biases in these datasets differ, and you need to compare like for like - either the same bias across different datasets or the same dataset with different biases. While it's possible to compare the CR metrics between datasets (difference of differences), I believe this is insufficient on its own. First, I would like you to clarify (both here and in the paper) the rationale behind distinguishing between biases and datasets, as well as why certain biases may not be applicable to all datasets. I haven’t given this much thought, but it is critical that this distinction is explicitly explained in the paper, rather than leaving it up to the reader to infer." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- How do we know that evaluations that rely on perturbations by LLMs can be trusted? How do we know that such perturbations do not introduce errors or biases other than those which are intended to be evaluated?\n- How could one interpret the results of evaluating a bias using the CALM framework in terms of its effect on real-life applications of the LLM-as-a-judge system? For example, if one LLM's robustness rate for diversity is 0.1 greater than another's, how does this the actual treatment of minorities by systems that utilize the LLM in a LLM-as-a-judge application?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper covers a comprehensive list of biases and conducts experiments on many state-of-the-art LLMs and across multiple relevant domains such as fact-based and alignment-related data.\n- They introduce a novel method for evaluating biases in LLM-as-a-judge systems that is well-principled and automatic. It is also flexible as the framework could even be extended to bias types that are not considered in the paper. \n- They systematically demonstrate that current LLMs are still susceptible to various biases. As far as I am aware, many of their evaluation results are completely novel, such as demonstrating how different types of fake-authorities interfere with LLM-judges to varying degrees." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces CALM, a framework for measuring bias in LLM-as-a-judge applications. The framework works by modifying the prompt or response that is to be judged by introducing a bias, and then measuring how this modification affects the judgement. They propose a classification of biases into 12 distinct types, each of which can be measured by their framework. The introduces typology covers a broad spectrum such as bias based on the length of answers, the use of an authoritative tone or faux citations. \n\nTo evaluate the magnitude of these biases when current LLMs are used as judges the paper introduces various metrics, most importantly the robustness of a judgement when a bias is added to an answer. Biases are evaluated on three types of datasets which are sampled from existing sources. They cover factual data for which responses should be evaluated according to factual accuracy, alignment related data for which judgements depend on user preferences, and refinement aware evaluation data which contains pairs of responses in which one is a refinement of the other.\n\nFor its main results, the paper evaluates the biases of multiple state-of-the-art LLMs when used in an LLM-as-judge system. The results demonstrate that current models are susceptible to various biases in their judgements. 
Some noteworthy findings include:\n- All models are significantly impacted by position bias.\n- Most models judge their own output more favorably than that of other models, even when sources are anonymized.\n- Different types of fake-citations influence an LLM's judgement to various degrees. Quote- and book formats have a higher chance of being convincing than URL citations.\n\nThese as well as other findings are discussed in the results section." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "# Main Concerns\nI have two concerns related to the soundness of the paper's methodology and experimental results. \n\nThe paper introduces a new method for evaluating biases but does not evaluate the trustworthiness of this method. The method is based on perturbing the responses that an LLM-as-a-judge is supposed to evaluate. In some cases this evaluation works via a separate LLM such as for the verbosity bias, fallacy-oversight bias, or authority bias where GPT-4-Turbo is used to rephrase a response. How do we know that using an LLM to modify responses does not introduce errors or other features that manipulate the judge's decision? While I can believe that GPT-4-Turbo is capable of applying the required modifications, this should be experimentally verified so that the results have scientific rigor. \n\nFurther, the paper provides scores of LLM-judge biases in the form of the robustness rates, but I can not tell what these scores mean for real-life applications of said LLM-judges. For example, LLM-as-a-judge is typically used for evaluation, such as a reward model during RLHF. If my LLM-as-a-judge system has a specific robustness score for a bias type such as diversity, how does this translate to the bias of a downstream system, such as an LLM that was trained using the judge? Without such results, it is unclear how to interpret the paper's numerical results.\n\nSumming up, I believe two types of experiments are necessary to improve the paper's soundness. \n- Demonstrate that perturbation using LLMs does not introduce unintended errors or biases that are different from what is intended.\n- Evaluate the effect of different bias scores for different LLM-as-a-judge systems on real-life applications of the systems.\n\nIf related experiments or arguments are added, I will improve my score.\n\n# Minor Comments (did not impact score)\n- It would be helpful to have example questions and responses from each dataset somewhere in the paper, to illustrate what the difference between e.g. the fact-related and alignment-related dataset is. They could be included in the appendix if there is no space in the main paper.\n- Figure includes some plots where a larger value of the y-axis means less bias (robustness rate) and some for which is a lower value is better (error rate). The plot would be easier to read if this was indicated somehow." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024justice,\ntitle={Justice or Prejudice? Quantifying Biases in {LLM}-as-a-Judge},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3GTtZFiajM},\nnote={under review}\n}" }, "abstract": { "value": "LLM-as-a-Judge has been widely utilized as an evaluation method in various benchmarks and served as supervised rewards in model training. 
However, despite their excellence in many domains, potential issues are under-explored, undermining their reliability and the scope of their utility. \nTherefore, we identify 12 key potential biases and propose a new automated bias quantification framework—CALM—which systematically quantifies and analyzes each type of bias in LLM-as-a-Judge by using automated and principle-guided modification. Our experiments cover multiple popular language models, and the results indicate that while advanced models have achieved commendable overall performance, significant biases persist in certain specific tasks. Empirical results suggest that there remains room for improvement in the reliability of LLM-as-a-Judge. Moreover, we also discuss the explicit and implicit influence of these biases and give some suggestions for the reliable application of LLM-as-a-Judge. Our work highlights the need for stakeholders to address these issues and remind users to exercise caution in LLM-as-a-Judge applications." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "LLM", "LLM-as-a-Judge", "trustworthy LLM", "evaluation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/39c4fdd7f0cb600718984348a99acfb81ae33dfd.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/29f10f8da6359a118f02cccabb8e3cf0919a921d.zip" }, "title": { "value": "Justice or Prejudice? Quantifying Biases in LLM-as-a-Judge" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3Gga05Jdmj
CtrLoRA: An Extensible and Efficient Framework for Controllable Image Generation
main
Active
Controllable Image Generation;Image-to-Image Generation;ControlNet;LoRA;Resource-Efficient Adaptation
generative models
5;5;6;6
5;4;5;3
3;3;3;3
2;2;3;3
3;2;3;3
5.5
4.25
3
2.5
2.75
-0.301511
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to the weakness." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is well-organized and easy to follow.\n- The authors conduct sufficient ablation studies to evaluate the proposed modules.\n- The experiments demonstrate the training efficiency of the proposed method and its capability to unify various visual conditions for generation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes CtrLoRA for better controllability of the conditional image generation. This framework trains a Base ControlNet for the general image-to-image generation and then uses the LoRA fine-tuning for specific user instructions. Experiments show the effectiveness of the proposed method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The authors train a base ControlNet for the subsequent LoRA fine-tuning. However, why not directly fine-tune a pre-trained ControlNet or Uni-ControlNet?\n\n- Lack of comparison to: ControlNet++[1].\n\n- The paper does not explore whether this method can be generalized to other diffusion models such as SDXL and Pixart.\n\n[1] Li M, Yang T, Kuang H, et al. ControlNet++: Improving Conditional Controls with Efficient Consistency Feedback[C]//European Conference on Computer Vision. Springer, Cham, 2025: 129-147." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. The results in Figure11b demonstrate that the different conditions are effectively disentangled, with a direct summation module according to Figure 3c. Could you clarify why this module is effective, such as presenting the results of two elements both separately and after sum-up.\n\n2. A detail, why not presenting all 9 base-condition results comparison to UniControl in Table 2?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper focus on an important problem, extending ControlNet to a lightweight manner.\n2. 
Experimental results are impressive, especially the convergence experiment." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes CtrlLoRA, a two-stage parameter-efficient fine-tuning pipeline, to ease the original ControlNet's computational burden across different conditions. The authors evaluate CtrlLoRA through extensive experiments in terms of both quality and computational efficiency." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. In line 70, the authors state that ControlNet with Canny edges requires 3 million images over 600 GPU hours for one condition. In contrast, line 244 indicates that Base ControlNet necessitates millions of images for 6000 GPU hours for 9 conditions. Although this comparison is not entirely fair, it implies that the proposed method does not significantly reduce the computational burden.\n\n2. In line 239, the mechanism of training with 9 conditions is not clear enough. As different conditions carry different levels of sparse information about the input images, why do they have equal training iterations? Moreover, continuously shifting between different conditions may make the training hard.\n\n3. The motivation for why the new conditions are not trained like the Base ControlNet via a shifting mechanism is not clear enough.\n\n4. Most results are from \"Base CN + CtrlLoRA\", and results from \"Community Model + CtrlLoRA\" in Figure 11a are rare, which is not enough to convincingly show that CtrlLoRA is effective when transferring to other community models.\n\n5. The pretrained-VAE seems to be only an interesting trick.\n\n6. Putting all the prompts in the appendix makes reading inconvenient." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Regarding the discussion of \"Adaptation to new conditions,\" while training a comparison method from scratch with a small amount of data may indeed result in slow convergence, what would be the results if we used a pre-trained conditional model (analogous to possessing a Base ControlNet) for fine-tuning?\n\n- I'm curious about the performance comparison between a pre-trained ControlNet model available in the community and a model trained using the proposed \"Base + LoRA\" on the same conditions." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- To address the high cost of separately training different models for conditional generation tasks, this paper proposes a training method that transitions from a base ControlNet model to a lightly fine-tuned LoRA model.
This approach ensures generation quality while achieving a faster convergence rate.\n\n- The paper shows many analyses of the proposed method and presents the results generated for a total of more than a dozen conditions.\n\n- The paper is well structured and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper draws on the idea of combining a base model with PEFT (Parameter-Efficient Fine-Tuning) for controllable generation. It trains a Base ControlNet obtained through several condition-specific training processes, and then fine-tunes it with a small amount of data for newly introduced conditions to obtain different condition-specific LoRAs. This approach improves the efficiency of training new condition generators at a lower cost." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper primarily aims to improve the training efficiency of all kinds of conditional models, hence it employs a series of LoRAs to train the newly introduced conditions based on the \"Base ControlNet\". However, there is relatively little comparison and discussion of existing methods that efficiently train ControlNet, such as T2I-Adapter, ControlLoRA, and SCEdit.\n\n- There currently exists a viable **controlnet-union** model, which can handle different conditions using a single model. This may be a higher-level representation of the training of the \"Base ControlNet\" model discussed in the paper. On the other hand, the use of LoRA for fine-tuning is relatively straightforward and has been implemented in previous community works, such as ControlLoRA. In comparison, the overall innovativeness of the paper is limited.\n\n- The paper does not discuss how many conditions to use or how to select conditions for training the \"Base ControlNet\" to achieve optimal knowledge transfer effects." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Consider specifying 1-2 new image conditions and key metrics (e.g., adaptation speed, data efficiency, performance) for comparing UniControl [1] fine-tuning to CtrLoRA. This would provide a clear, focused comparison.\n2. Additional baselines are required for each base image condition. Comparisons should be made with a fully trained ControlNet, which has been trained exclusively under a single image condition, to establish a more comprehensive benchmark.\n3. Similarly, for the new condition, it is essential to compare the performance of CtrLora against ControlNet when ControlNet has been fully trained on a single modality. This will provide a clearer understanding of their relative efficiencies.\n4. It would be beneficial to explore how the number of image conditions used during the training of the base ControlNet affects its ability to learn new conditions. 
Insights into the scalability and adaptability of the base network could prove crucial for future applications.\n5. I have noted that CtrLoRA can perform low-level image enhancement tasks, such as low-light image enhancement. Could the authors demonstrate how CtrLoRA performs in comparison to other diffusion models for low-light image enhancement? This could highlight potential advantages or unique features of CtrLoRA in practical applications.\n\n[1] UniControl: A Unified Diffusion Model for Controllable Visual Generation In the Wild." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The CtrLoRA framework introduced in this paper allows users to quickly and efficiently fine-tune the ControlNet to new image conditions, with minimal resource consumption. The experimental results validate the effectiveness of this method. Additionally, the paper is well-structured and clearly written." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors propose the CtrLoRA framework. This framework starts by training a basic ControlNet that handles various image conditions efficiently. With this trained network, one can quickly fine-tune it to adapt to new conditions using a task-specific LoRA—specifically, fine-tuning requires only 1,000 paired images and less than an hour on a single GPU. The experimental results confirm that this method greatly speeds up the training process for new image conditions. Based on these impressive findings, I recommend a weak acceptance. However, there are some unclear points and missing experiments in the paper (see the Question section), and my final decision will depend on the authors' responses to these issues." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "There are some unclear points and missing experiments in the paper (see the Question section), and my final decision will depend on the authors' responses to these issues." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024ctrlora,\ntitle={CtrLo{RA}: An Extensible and Efficient Framework for Controllable Image Generation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3Gga05Jdmj},\nnote={under review}\n}" }, "abstract": { "value": "Recently, large-scale diffusion models have made impressive progress in text-to-image (T2I) generation. To further equip these T2I models with fine-grained spatial control, approaches like ControlNet introduce an extra network that learns to follow a condition image. However, for every single condition type, ControlNet requires independent training on millions of data pairs with hundreds of GPU hours, which is quite expensive and makes it challenging for ordinary users to explore and develop new types of conditions. To address this problem, we propose the CtrLoRA framework, which trains a Base ControlNet to learn the common knowledge of image-to-image generation from multiple base conditions, along with condition-specific LoRAs to capture distinct characteristics of each condition.
Utilizing our pretrained Base ControlNet, users can easily adapt it to new conditions, requiring as few as 1,000 data pairs and less than one hour of single-GPU training to obtain satisfactory results in most scenarios. Moreover, our CtrLoRA reduces the learnable parameters by 90% compared to ControlNet, significantly lowering the threshold to distribute and deploy the model weights. Extensive experiments on various types of conditions demonstrate the efficiency and effectiveness of our method. All codes and model weights will be publicly available." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Controllable Image Generation", "Image-to-Image Generation", "ControlNet", "LoRA", "Resource-Efficient Adaptation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/3a5e48f0c23fbf52c7f2a14ab218f0738fdcaad5.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "CtrLoRA: An Extensible and Efficient Framework for Controllable Image Generation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3Gzz7ZQLiz
Learning to Contextualize Web Pages for Enhanced Decision Making by LLM Agents
main
Active
Large Language Models;LLM agent;Web automation
applications to robotics, autonomy, planning
3;5;8;8
4;4;2;4
1;3;3;4
2;2;3;4
3;2;3;4
6
3.5
2.75
2.75
3
-0.544331
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Can you add more description of how self-contextualization works? This is the identical contextualization prompt just uses the LLM agent model (e.g. Gemini-1.5-flash) instead of the trained Phi-3-instruct model. \n\nTable 3 in appendix: Data and caption do not match. 33*15 = 495. Are the numbers the number of successful demonstrations collected? Some more information on the demonstration collection would be helpful. \n\nDoes figure 8 include both successful and failed tasks? -> Are the distributions over the same tasks? \n\nLine 182: “select the one that provides the most relevant context for the LLM agent to accurately predict the next action at as the target.” \n- This could be written more clearly. \n\nWill the model being released?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "LCoW shows a very clear benefit and improvement to LLM web agents. \n\nLCoW shows state of the art improvement on WebShop against strong baselines such as AgentQ and LASER. This is comparable to human expert level on WebShop tasks. \n\nThe experiments are pretty comprehensive showing improvement on different benchmarks with different agents and show continuous benefit for up to 3 iterations of LCoW training. \n\nThis method does not seem like it would add a ton of compute expenses and could be quite practical. \n\nThe paper is also well written." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "LCoW aims to advance LLM based web agents by adding a contextualization step to the webpage observation in which an LLM reduces the html/raw observation by pruning irrelevant elements and adds contextual descriptive information. This significantly improves the performance of the downstream web agent and sets state of the art results. \n\nThe algorithm works by first collecting successful trajectories as ground truth. Then a contextualizer model (with a prompt instructions) is used to reduce and explain the UI elements. This now contextualized observation is give to a set of LLM agents that produce actions. The contextualized model gets a high reward if the agents give the same action as in the successful trajectory. The best contextualized observation is then collected. Finally the model is trained to produce the \"good\" contextual observations. This can be repeated for multiple iterations. \n\nThe main contributions are:\n- The novel approach to LLM-based contextualization and parsing, enabling state of the art performance on web agent datasets. \n- The algorithm for training the contextualization model\n- The prompt for contextualization\n- Many experiments\n\nTheir results include experiments on\nWebShop and WebArena across multiple LLM agents with strong baselines in WebShop (such as AgentQ and LASER). 
\nThey also include ablations/analysis on how the action matching reward improves with iterations, how LCoW affects the number of steps required for each task, and comparisons of the original collected trajectories against behavior cloning." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "There are a few weaknesses of the proposed method, though some of these are more limitations than reasons to reject. \n\n1) It is unclear how this method translates across websites, domains, and to some extent tasks. Since this involves training the contextualization model, there is potential to overfit to the data available. It would be nice to have some experiments showing that LCoW trained on a few domains generalizes to many other domains. Perhaps LCoW could be evaluated on a task on, say, LinkedIn after only being trained on a couple of benchmarks. \n\n2) This is also true for generalizing over tasks as well. Perhaps LCoW fails when extending to tasks that require more contextual reasoning. \n\n3) The training details section notes that action matching based on parsing is infeasible for open-ended actions (such as filling in a text box) and uses GPT-4o to do matching. However, the bigger limitation is on open-ended tasks or tasks that have diverse ways/orders of completing them. How would LCoW perform when there are many actions that are reasonable? \n\n4) In real-world situations there may be many rollouts that include individual actions that are actually incorrect. LCoW would treat these as correct and may even train the model to drop the truly relevant areas of the page. \n\n5) The limitations section only notes novel UI elements as a limitation. It seems the limitations section should be expanded to cover some of the above concerns as well. \n\n6) This approach also relies on being able to collect successful trajectories, whereas other methods that employ search may be able to extend agent capabilities to new tasks.\n\n7) There are no experiments comparing to code-based HTML parsers for LLM agents. Though they are undoubtedly not as good, or there would already be models with performance comparable to LCoW. \n\n8) How long does the contextual observation generation take? In tasks that rely on parsing large amounts of text (e.g., Write a tweet based on this article), regenerating and contextualizing a whole article could be expensive, time-consuming, and not necessary. (This should be addressed)\n\nThere are a few nice-to-have experiments that are not present: \nGeneralization across task type or difficulty\nGeneralization across websites" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. The reward obtained from multiple LLMs is only used to judge whether the current step correctly predicts the real action.
Should the final expected outcome of the task be used instead?\n2. How does LCoW handle web pages with novel UI elements or layouts that were not encountered during training?\n3. Have any measures been taken to prevent overfitting, particularly given the iterative training process that relies on self-generated data?\n4. Can the web page be partitioned and analyzed through the prompt function? Also, there is no comparison with previous intelligent code analysis work." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper presents comprehensive experiments on popular benchmarks, demonstrating that LCoW significantly improves the performance of LLM agents across various scales. The success rates surpassing human experts are particularly impressive.\n2. The paper shows that the contextualization module trained with LCoW can generalize well to different LLMs, including those not involved in the training process, which is a strong indicator of the method's robustness.\n3. The proposed iterative algorithm for training the contextualization module is an effective approach that allows for continuous improvement of the module based on real-world interactions and feedback." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces LCoW, a novel framework that addresses the challenge of enhancing the decision-making capabilities of Large Language Models (LLMs) in the context of web automation tasks. The method distinguishes the comprehension of web content from the decision-making process by training a dedicated module that creates contextualized representations of intricate web pages." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. As mentioned in the paper, the contextualization module struggles with web pages containing UI elements not seen during training. This limitation could be a barrier to the framework's real-world applicability, especially given the vast diversity of web page designs.\n2. The contextualization module was trained on a relatively small dataset of fewer than 2,000 self-generated samples. Can the model generalize to a broader range of web pages and tasks?\n3. The paper does not extensively discuss the potential for overfitting, especially given the iterative training process that relies heavily on self-generated data. There is a risk that the model may perform well on similar tasks but fail to adapt to new, unseen scenarios.\n4. The contextualization module shown in Figure 7 is not intuitive enough." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- How do you know the performance gain is enhanced by the LLM's decision-making?\n- Figure 1: How do you select the reported tasks from WorkArena?\n- How do you ensure that crucial elements are not removed during the contextualization?\n- It seems the contextualization module could only be trained with successful trajectories from LLMs. What about those tasks that even those LLMs could not fulfill?\n- The WorkArena contains up to 1,000 task instances, why do you evaluate only 115 tasks? How do you select them?\n- If the contextualization is shopping relevant, do you believe it's less convenient to write several human oracle rules than to train a contextualization module?\n- Figure 8: The mean of the number of steps doesn't seem to differ much. What is the exact number?\n- It seems the behavior cloning baseline is trained with fewer demonstrations than the contextualization module." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "- The observation contextualization idea is clever, and the training of the contextualization doesn't require human-labeled data.\n- The results reported on the shopping tasks are promising, proving that the idea should work." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes to contextualize the observation of LLM-based online shopping agents to improve their performance. It trains a task-observation-reliant contextualization module to help locate the most important information on a page and provides explanations. The idea is clever and shows promising results on two shopping benchmarks. However, it doesn't include code or any playing episodes for the reviewers to verify the outcomes." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- I like the observation contextualization idea, but I've seen a highly ranked paper on the WebArena benchmark, a benchmark with a wider type of websites defined other than mere shopping here, using a similar but more general method that doesn't require task-related inputs. I believe the strong reliance on the web observation's format, as you mention in the limitation section, \"it often struggles to provide suitable contextualization for web pages containing UI elements unseen during the training,\" constrains this work's scope on shopping-related tasks only.\n- I would question your results as you didn't include code or episodes for reviewers to verify your conclusions, especially when you only play your agents on a partial WorkArena benchmark, whose results are easily controllable if you only select the tasks where your agents win.\n- I think you have sacrificed the agent's generalizability with the contextualization module specifically trained on the shopping tasks, as you put in the appendix." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* Explain the performance of different choices of LLM-based contextualization modules\n* Discuss the practical efficacy of these said agents given the cost/tokens for using the reward models and the lack of simulation environment for new unseen web sites." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Pipelines for LLM-based web agents are complex and the proposed approach breaks down \"contextualizing\" the web pages separately from decision making ability of the agents. The approach of leveraging simulation data across the web environments to train (fine tune) a small model for contextualizing shows good results. The reward model is in essence a LLM-based judge system across multiple powerful LLMs. The qualitative results are interesting as they highlight how the proposed contextualization module works in removing irrelevant components of the web page." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes training a language model to contextualize complex web pages for improving the success rates of LLM-based web agents. To enable this the proposed method uses the web simulator environments to roll out multiple trajectories and uses multiple LLMs to score the different candidates. This strategy provides a significant improvement over baseline open and closed-sourced models across different benchmarks like WebShop and WebArena." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* With respect to generalization capabilities, the study can be strengthened by demonstrating performance across the different web environment bechmarks or types of web pages (e.g, instead of picking or holding out 500 examples randomly the type of Web tasks could be used for creating the train/test held out set). \n\n* Additionally, it is not clear if in the real world environment a simulation environment is available to bootstrap and roll out the candidate sequence of state and action(s). As listed in the limitations section the power/promise of these agents diminishes given that performance drops when dealing with unseen UI elements. \n\n* For different web benchmarks, different LLMs were used for training the contextualization module. This needs to be explained or justified." 
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "Training language models to automatically contextualize web page observations to enhance the decision-making of LLM agents" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024learning,\ntitle={Learning to Contextualize Web Pages for Enhanced Decision Making by {LLM} Agents},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3Gzz7ZQLiz},\nnote={under review}\n}" }, "abstract": { "value": "Recent advances in large language models (LLMs) have led to a growing interest in developing LLM-based agents for automating web tasks. However, these agents often struggle with even simple tasks on real-world websites due to their limited capability to understand and process complex web page structures. In this work, we introduce LCoW, a framework for Learning language models to Contextualize complex Web pages into a more comprehensible form, thereby enhancing decision making by LLM agents. LCoW decouples web page understanding from decision making by training a separate contextualization module to transform complex web pages into comprehensible format, which are then utilized by the decision-making agent. We demonstrate that our contextualization module effectively integrates with LLM agents of various scales to significantly enhance their decision-making capabilities in web automation tasks. Notably, LCoW improves the success rates of closed-source LLMs (e.g., Gemini-1.5-flash, GPT-4o, Claude-3.5-Sonnet) by an average of 20%, and demonstrates a 33.5% average improvement in success rates for open-source LMs (e.g., Llama-3.1-8B, Llama-3.1-70B) on the WorkArena benchmark. Moreover, the Gemini-1.5-flash agent with LCoW achieves state-of-the-art results on the WebShop benchmark, outperforming human experts." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Large Language Models", "LLM agent", "Web automation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/917a517ab6936b1f63a1e8f4a62ed4d8566566a9.pdf" }, "presentation": null, "primary_area": { "value": "applications to robotics, autonomy, planning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Learning to Contextualize Web Pages for Enhanced Decision Making by LLM Agents" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3HPOtZxs5s
An Efficient Quantum Classifier Based on Hamiltonian Representations
main
Active
quantum computing;quantum machine learning;variational quantum circuits;quantum encoding
unsupervised, self-supervised, semi-supervised, and supervised representation learning
3;3;3;3
5;3;3;4
3;3;2;2
1;2;2;2
2;3;3;3
3
3.75
2.5
1.75
2.75
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "What happens to your method’s performance when it is run on noisy quantum hardware? \n\nHow resource-intensive is it to make your method fault-tolerant?\n\nWhy did you not try HAM and PEFF on the Fashion-MNIST dataset? Apologies if this was already covered, but I struggled to find it." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Strengths: The paper is exceptionally well-written. The exposition is clear, the protocol is well explained, and the background well-done. The idea to embed states in a Hamiltonian and then leverage VQE is interesting as well." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Summary: In this paper, the authors put forward a new quantum machine learning (QML) classifier. Their new approach leverages the variational quantum eigensolver algorithm to find a parametrized quantum circuit that classifies inputs based on their expectation value. Underlying their new approach is their idea to embed input states within a Hamiltonian." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Weaknesses: The article has three main weaknesses.\n - No theoretical performance guarantees: There are no theoretical results guaranteeing any a super-polynomial improvements over classical methods. \n - Weak results on simulated data: The new method is routinely outperformed by other methods including logistic regression on both text datasets. Sure, some of the competitors require more parameters, but none of the other models are honestly that big. These results would be far more compelling if there was a clear super-polynomial advantage with the new method.\n - No discussion of robustness to noise: This is a big one. There’s no discussion or evaluation of the methods robustness to noise. Unless you’re proposing a fault-tolerant algorithm, you need to discuss noise and compare your method’s performance in the presence of NISQ-era levels of noise to classical competitors." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. How to choose the {${P_j}$}$_{j=1}^p$ in Eq.12?\n2. 
Is there any theoretical analysis showing that the generalization error is bounded with respect to the finite number of terms $P$?\n3. Instead of using a parametrized quantum circuit, is it enough to only optimize the parameters in the measurement (a parametrized Hamiltonian, as in https://arxiv.org/pdf/2307.10560)?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper is well-written and clearly presents the concept and idea." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents the Hamiltonian classifier, a quantum machine learning approach that enhances the data encoding by mapping inputs to Pauli strings, and provides related proof-of-principle experiments to demonstrate the advantages." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. A similar idea was proposed in Post-variational quantum neural networks (https://arxiv.org/pdf/2307.10560).\n2. The paper is missing some theoretical analysis of the performance of the proposed model." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "In the complexity analysis, section 3.6, you mention that your method incurs an additional $1/\epsilon^2$ cost. Is this the same for other methods presented in Table 1?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This paper is clearly written and scientifically sound. The improvement in gate complexity is substantial, since no other methods achieve logarithmic scaling for these quantities.\nI believe the paper provides a good advance in the field of variational quantum circuits." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Authors provide a new method to implement variational quantum circuits (VQCs) that can be used for machine learning tasks using quantum hardware. This method achieves a logarithmic scaling in both qubits and quantum gate counts, while having worse sample complexity (as presented in Table 1). They then numerically simulate their quantum algorithm on text and image classification, and achieve results that are mostly better than other forms of VQCs and on par with classical neural networks (such as MLPs and CNNs)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main critique of this paper is that the method achieves mostly the same performance as very simple classical models, such as logistic regression, MLPs and CNNs, while being based on having access to ideal quantum hardware, which is not readily available (and will only be made available in the long term). 
Therefore, the main value of the paper is an algorithmic advance for VQCs (and in how data is encoded into the quantum computer), but not for the global field of machine learning itself.\nSimulations that include noise could be useful, which hopefully would show that even on noisy quantum hardware the results are not significantly altered.\nThe benchmarks are somewhat sparse, and some of them include results that have 100% test accuracy for many methods, so we cannot know which is better. Considering more difficult datasets would be more interesting.\nThe maximum number of qubits considered is $n=10$ if I am not mistaken, which is significantly smaller than what can be simulated with even small computational resources ($n=20$ is feasible in the noiseless setting)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Here are some minor questions:\n(1) On Page 6, the authors claimed that they can randomly define $p$ Pauli strings, and decompose the data matrix $H_{\phi}(\tilde{x})$ onto the sampled Pauli basis. Then, is there any criterion for selecting these Pauli strings? What is the scaling of the parameter $p$, and what is the relationship between $p$ and the power of the Hamiltonian classifier model (such as the generalization error upper bound or the effective dimension)? \n(2) In equation 10, $\alpha_i$ has an exponentially small factor $2^{-n}$. Does this factor cause the measurement accuracy to become exponentially small, leading to a large amount of measurement overhead?\n(3) In Table 1, Cong et al. (2019) do not utilize the QCNN to solve a classical task; instead, they predict the quantum phase transition problem. Is it fair to compare Cong et al. (2019) to the authors' work on a classical task?\n(4) In Tables 2 and 3, it is observed that the proposed methods (HAM, PEFF, and SIM) do not outperform all the listed methods. Given these facts, what is the advantage of the proposed Hamiltonian classifier method?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "From a high-level perspective, this paper is well-written, clearly introducing the research background and presenting its results. Meanwhile, the authors have put significant effort into the numerical simulation section, where they compare various related quantum-classical methods across several datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper argues that previous NISQ quantum machine learning algorithms generally require a large amount of computational resources, which may not be suitable for NISQ devices. To address the data-encoding problem, the authors encode classical data into a physical Hamiltonian, referred to as the Hamiltonian classifier. 
To demonstrate the power of the Hamiltonian classifier, they numerically test their model on several popular classical datasets, including text and image classification tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "(1) One of the main concerns is that the \"Hamiltonian Classifier\" has been proposed and studied in several papers, such as [S. Jerbi et al., Shadows of Quantum Machine Learning, Nat. Comm., 2024] and [Y. Song et al., A Quantum Federated Learning Framework for Classical Clients, Sci. China Phys. Mech. Astron. (2024)]. However, this paper does not cite these highly relevant works. As a result, the authors' contributions are significantly diminished, especially the claim in Sec. 3, page 4, \"To the best of our knowledge...\"\n\n(2) As a high-level conference (ICLR) in the field of machine learning, we expect to see more surprising results in quantum machine learning. The authors still utilize the standard classical-quantum workflow, despite encoding the data into a physical Hamiltonian. While many papers adopt this framework and numerically benchmark their methods on some datasets, such research contributes very little to quantum computation theory and the quantum machine learning community, particularly at this stage. This is supported by recent findings: it has been shown that many classical-quantum QML methods (including the authors') are classically simulable when the model does not suffer from the barren plateaus phenomenon [A. Angrisani et al., Classically Estimating Observables of Noiseless Quantum Circuits, arXiv:2409.01706]. Furthermore, when the QML algorithm is limited to a 2D architecture, all constant-depth (or constant evolution time) quantum-classical approaches can be classically simulated [S. Bravyi et al., Classical Algorithms for Quantum Mean Values, Nat. Phys., 2021]. From this perspective, it appears that the Hamiltonian classifier method may not provide a clear quantum advantage." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "An efficient quantum machine learning method named \"Hamiltonian classifier\" that achieves logarithmic complexity in both qubits and gates by representing inputs as measurements rather than using traditional state preparation" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024an,\ntitle={An Efficient Quantum Classifier Based on Hamiltonian Representations},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3HPOtZxs5s},\nnote={under review}\n}" }, "abstract": { "value": "Quantum computing shows great potential for expanding the range of efficiently solvable problems. This promise arises from the advantageous resource and runtime scaling of certain quantum algorithms over classical ones. Quantum machine learning (QML) seeks to extend these advantages to data-driven methods. Initial evidence suggests quantum-based models can outperform classical ones in terms of scaling, runtime and generalization capabilities. However, critics have pointed out that many works rely on extensive feature reduction or use toy datasets to draw their conclusions, raising concerns about their applicability to larger problems. Scaling up these results is challenging due to hardware limitations and the high costs generally associated with encoding dense vector representations on quantum devices. 
To address these challenges, we propose an efficient approach called Hamiltonian classifier inspired by ground-state energy optimization in quantum chemistry. This method circumvents the costs associated with data encoding by mapping inputs to a finite set of Pauli strings and computing predictions as their expectation values. In addition, we introduce two variants with different scaling in terms of parameters and sample complexity. We evaluate our approach on text and image classification tasks, comparing it to well-established classical and quantum models. Our results show the Hamiltonian classifier delivers performance comparable to or better than these methods. Notably, our method achieves logarithmic complexity in both qubits and quantum gates, making it well-suited for large-scale, real-world applications." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "quantum computing", "quantum machine learning", "variational quantum circuits", "quantum encoding" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/449f128da0f62da574fd361fc9ca4025f2e35d32.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "An Efficient Quantum Classifier Based on Hamiltonian Representations" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3Hg5ufmfRu
ACE: Attack Combo Enhancement Against Machine Learning Models
main
Withdraw
machine learning security and privacy;membership inference;attribute inference;property inference;adversarial examples
alignment, fairness, safety, privacy, and societal considerations
Yugeng Liu;Zheng Li;Hai Huang;Michael Backes;Emiliano De Cristofaro;Yang Zhang
~Yugeng_Liu1;~Zheng_Li17;~Hai_Huang4;~Michael_Backes3;~Emiliano_De_Cristofaro1;~Yang_Zhang15
3;5;6;6
4;5;4;4
2;2;3;2
2;3;3;3
2;3;3;3
5
4.25
2.25
2.75
2.75
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": { "value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors." } }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Adding additional evaluations with mode-free attack baselines as well as defense strategies will significantly improve the paper." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The idea of combining different types of attacks to enhance the performance of the primary attack is interesting.\n2. The performed evaluations mostly cover the key points made in the paper." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies the problem of conducting combined attacks by leveraging some secondary of different type to enhance the performance of the primary attack. For example, attackers can use property inference attacks first to better guess the target property distribution and then generate higher quality auxiliary datasets to further assist in the performance of the attribute inference attacks. The attack results indicate that the combined method outperforms the single attack, indicating the the strength of attacks in practice can be much more effective." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The performance evaluation is heavily focused on leveraging shadow model based attacks. I think a comprehensive evaluation should also include some model-free attacks (e.g., LiRA for membership inference attacks), and this is important because, shadow model based approaches do not always outperform the model-free attacks in all settings. And the current combined method strongly binds to the shadow model based attack and so, the inclusion of the model-free attacks is necessary. \n2. The evaluation of existing defense for the primary attacks should also be included. 
Results such as showing that combined attacks can make the defense cost significantly higher (e.g., DP-based defenses having to sacrifice more utility to empirically resist the attacks) would strengthen this point.\n3. The related work section also misses some relevant works [1], [2], which both consider combining training-time and test-time types of attacks, given that the authors consider Wen et al.'s work relevant.\n\n[1] Feng & Tramer, \"Privacy Backdoors: Stealing Data with Corrupted Pretrained Models\", ICML 2024. \n\n[2] Tian et al., \"Manipulating Transfer Learning for Property Inference\", CVPR 2023." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "This paper violates plagiarism policies. According to iThenticate/Turnitin results, the authors directly copied the following content from a previous publication [1]: (1) Section 5.3, covering Attribute Inference and Membership Inference, and (2) Sections A.2 and A.3. The plagiarized material spans approximately one and a half pages, which is a substantial infringement of ethical guidelines.\n\n[1] Liu, Yugeng, Rui Wen, Xinlei He, Ahmed Salem, Zhikun Zhang, Michael Backes, Emiliano De Cristofaro, Mario Fritz, and Yang Zhang. \"{ML-Doctor}: Holistic risk assessment of inference attacks against machine learning models.\" In 31st USENIX Security Symposium (USENIX Security 22), pp. 4525-4542. 2022." }, "flag_for_ethics_review": { "value": [ "Yes, Research integrity issues (e.g., plagiarism, dual submission)" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please refer to my comments for more details." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The attack vector is interesting and easy to understand\n2. The attack setting is common in real-world applications\n3. The results of the individual/combo attack seem promising" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper explores \"attack combos\" where multiple ML attacks are combined, enhancing overall impact. Traditional studies examine attacks individually, but adversaries often employ multiple strategies at once. Focusing on four attacks—adversarial examples, attribute inference, membership inference, and property inference—the authors introduce a taxonomy for attack interactions across three stages: preparation, execution, and evaluation. They identify four effective combos, demonstrating their effectiveness across various ML models and datasets. A toolkit, ACE, is also developed to support research in this area." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. This paper violates plagiarism policies. According to iThenticate/Turnitin results, the authors directly copied the following content from a previous publication [1]: (1) Section 5.3, covering Attribute Inference and Membership Inference, and (2) Sections A.2 and A.3. 
The plagiarized material spans approximately one and a half pages, which is a substantial infringement of ethical guidelines.\n\n2. The novelty of this paper appears limited. The authors primarily evaluate four standard attacks and their combinations on three basic datasets. Some key advanced attacks, such as model stealing and backdoor attacks, are equally critical in this domain and also need to be thoroughly evaluated and discussed.\n\n3. The lack of critical details makes it difficult to understand what the authors did or what motivated their choices. For instance, while the paper claims that the combo attack is more effective than individual attacks, this is not clearly explained in the methodology and evaluation sections.\n\n4. The paper does not clarify whether the proposed ACE tool can be used to evaluate the impact of inference attacks on commercial ML models, such as those provided through Machine Learning as a Service (MLaaS). \n\n5. Writing needs to be improved. Please proofread the whole paper carefully to correct typos and grammar errors.\n\n[1] Liu, Yugeng, Rui Wen, Xinlei He, Ahmed Salem, Zhikun Zhang, Michael Backes, Emiliano De Cristofaro, Mario Fritz, and Yang Zhang. \"{ML-Doctor}: Holistic risk assessment of inference attacks against machine learning models.\" In 31st USENIX Security Symposium (USENIX Security 22), pp. 4525-4542. 2022." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see the weaknesses above." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- To the best of my knowledge, this is the first work to investigate attack combinations across different categories. This is important to develop more secure and reliable models.\n- The released toolkit is beneficial to further exploring the threat of other kinds of attack combinations.\n- The experimental results show that certain combinations of attacks perform better than existing attacks and are insightful for developing stronger attacks and more robust models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This is the first study that investigates the combination of different types of test-time adversarial attacks, including adversarial examples, attribute inference, membership inference, and property inference. The authors decompose the attack pipeline into three phases, i.e., preparation, execution, and evaluation. Through experiments, the authors identify four effective combinations of attacks, such as property inference assisting attribute inference in the preparation phase and adversarial examples assisting property inference in the execution phase. The authors also develop a modular and reusable toolkit that helps investigate effective attack combinations." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The architecture of the attacked model is limited. To support the generality of the findings, it would be helpful to attack Transformer-based models, which are known to behave differently than CNNs against adversarial attacks.\n- The experiments are conducted on relatively small datasets. It would be helpful to investigate the behavior of attack combo on large-scale datasets." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. In ADV2MemInf, the authors chose to constrain the amount of perturbation using the $l_2$ norm. What is the reason behind this design choice, given that both Square and PGD are applicable with different $l_p$ norms? A broader question is: What kind of criterion are the authors using when choosing specific attacks during the experimental evaluation? For instance, using more recent adversarial example generation techniques than PGD might give better results when combined with MemInf.\n2. Other prevalent inference-phase attacks are model extraction (or model stealing) and model inversion. Why have the authors chosen not to include these attacks within the ACE framework? For example, shadow models used in membership inference could potentially improve model extraction, or vice versa.\n3. In Table 2, PropInf2AttInf in CelebA does not improve the F1 score (only 1 pp on average) and the VGG19 model trained on the CIFAR10 dataset. What is the reason behind this while the remaining results show some improvement?\n4. In page 8, lines 419-420, the authors state that the overfitting does not have a significant impact on the membership inference attack. This conclusion is false; as demonstrated by other state-of-the-art works, overfitting, in fact, affects the membership inference attack (Shokri et al., 2017; Liu et al., 2022b)." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. ACE is a novel framework that combines different inference-time attacks with the aim to enhance each other's performance. \n2. The rationale for each attack combination aimed at improving the primary attack is well-founded.\n3. The experimental setup is clear enough to replicate experiments if necessary." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors investigate intentional interactions among various inference-phase attacks against ML models and how an adversary can leverage the information obtained from one type of attack to increase the effectiveness of another attack. The proposed framework, ACE, combines various attacks in different phases of attack implementation (preparation, execution, and evaluation phases). 
Empirical results in three different datasets show that attack performance is increased when they are combined. I think ACE is a novel tool for sophisticated adversaries, but the paper lacks a systematic evaluation (see weaknesses and questions) and might need a revision to improve the quality/discussion." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper focuses on improving the performance of the attacks using various attack combinations. However, the paper lacks an evaluation of how common defense mechanisms (against adversarial examples, membership inference, etc.) affect the effectiveness of such enhanced attacks.\n2. Although the authors empirically show that the effectiveness of the attack increases when they leverage the information from another inference-phase attack, the paper lacks disussion on the efficiency. For example, the implementation of MemInf and PropInf requires the training of numerous shadow models. Although adversarial examples increase the effectiveness of MemInf, they do not reduce the attack's cost and further complicate the attack mechanism, potentially rendering it impractical as an attack strategy. \n3. Although the authors consider both white-box and black-box settings, the population of attack strategies in the paper is not enough to establish a benchmark tool as detailed as, e.g., ML-Doctor or TrojanZoo." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@misc{\nliu2024ace,\ntitle={{ACE}: Attack Combo Enhancement Against Machine Learning Models},\nauthor={Yugeng Liu and Zheng Li and Hai Huang and Michael Backes and Emiliano De Cristofaro and Yang Zhang},\nyear={2024},\nurl={https://openreview.net/forum?id=3Hg5ufmfRu}\n}" }, "abstract": { "value": "Machine learning (ML) models are proving to be vulnerable to a variety of attacks that allow the adversary to learn sensitive information, cause mispredictions, and more. \nWhile these attacks have been extensively studied, current research predominantly focuses on analyzing each attack type individually.\nIn practice, however, adversaries may employ multiple attack strategies simultaneously rather than relying on a single approach.\nThis prompts a crucial yet underexplored question: when the adversary has multiple attacks at their disposal, are they able to mount or enhance the effect of one attack with another?\nIn this paper, we take the first step in studying the intentional interactions among different attacks, which we define as attack combos. \nSpecifically, we focus on four well-studied attacks during the model's inference phase: adversarial examples, attribute inference, membership inference, and property inference. \nTo facilitate the study of their interactions, we propose a taxonomy based on three stages of the attack pipeline: preparation, execution, and evaluation.\nUsing this taxonomy, we identify four effective attack combos, such as property inference assisting attribute inference at its preparation level and adversarial examples assisting property inference at its execution level. \nWe conduct extensive experiments on the attack combos using three ML model architectures and three benchmark image datasets.\nEmpirical results demonstrate the effectiveness of these four attack combos.\nWe implement and release a modular, reusable toolkit, ACE. 
\nArguably, our work serves as a call for researchers and practitioners to consider advanced adversarial settings involving multiple attack strategies, aiming to strengthen the security and robustness of AI systems." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": { "value": [ "~Yugeng_Liu1", "~Zheng_Li17", "~Hai_Huang4", "~Michael_Backes3", "~Emiliano_De_Cristofaro1", "~Yang_Zhang15" ] }, "authors": { "value": [ "Yugeng Liu", "Zheng Li", "Hai Huang", "Michael Backes", "Emiliano De Cristofaro", "Yang Zhang" ] }, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "machine learning security and privacy", "membership inference", "attribute inference", "property inference", "adversarial examples" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": { "value": "liu|ace_attack_combo_enhancement_against_machine_learning_models" }, "pdf": { "value": "/pdf/fae7730f9756d2d4f27f28d8bd0bbd711f7f7f5b.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "ACE: Attack Combo Enhancement Against Machine Learning Models" }, "venue": { "value": "ICLR 2025 Conference Withdrawn Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Withdrawn_Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3Hy00Wvabi
WorkflowLLM: Enhancing Workflow Orchestration Capability of Large Language Models
main
Active
Large Language Models;Process Automation;Workflow;Tool Learning
generative models
5;5;5;8
4;4;4;3
2;2;2;3
1;2;2;3
2;3;1;3
5.75
3.75
2.25
2
2.25
-1
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Q1)The “pass rate” is an important metric to evaluate the performance in section 4.3. Could you elaborate more on how it is calculated? What are the reasons to choose it? And how is related to training/fine-tuning phases?\n\nQ2)Could you contextualize the size and the quality of the generated dataset in terms of how significant it is in comparison with SOTA or related works? Are they general enough for RPA for just for applications around Apple Shortcuts and RoutineHub?\n\nQ3) The papers also mention about fine-tuning Worfflow Lalama and annotator in a very short paragraph in section. Could you elaborate more about setup, why such training parameters are chosen?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "S1)The observation and problems are interesting and relevant to the conference\n\nS2) The dataset might have potential uses\n\nS3)The experiments involve many systems" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper argues that state-of-the-art models like GPT-4o face challenges in effectively handling complex workflows. To address this, the paper introduces WorkflowLLM, a data-centric framework designed to enhance the workflow orchestration capabilities of LLMs. Central to this framework is WorkflowBench, a large-scale fine-tuning dataset comprising 106,763 workflows, 1,503 APIs, and 83 applications across 28 categories. WorkflowBench is constructed through a three-phase pipeline: data collection from sources like RoutineHub, query expansion using ChatGPT to create diverse and complex workflows, and workflow generation leveraging a trained annotator model for high-quality workflow synthesis. The dataset is enriched with Python-style code, comments, and hierarchical thought structures to improve LLM learning.\n\nBased on WorkflowBench, the paper fine-tunes Llama-3.1-8B, resulting in WorkflowLlama, which demonstrates superior performance compared to existing models, including GPT-4o, on workflow orchestration tasks. WorkflowLlama achieves notable success in generalization to unseen APIs and tasks, evaluated using metrics such as CodeBLEU and Pass Rate. Additionally, WorkflowBench exhibits robust zero-shot generalization on the T-Eval benchmark, achieving an F1 plan score of 77.5%." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "W1) The scientific or technical contributions are limited as the key contributions of paper are around data curation effort. And the significance of data size and quality is not clear (see Q2). \n \nW2) Section 3 mentions many steps to generate the needed data for its workflow using ChatGPT but the Quality control protocol is not very clear. 
Even though the authors provide some examples and algorithms in the appendix, the details are largely descriptive. Also, Section 4.2 mentions using human evaluators to re-label the sampled data; it’d be better to provide more details about the quality control protocol for this human-driven process as well.\n\nW3) Many technical details are not very clear (see questions).\n\nW4) Code is provided in the supplementary material, but there is no documentation of how to reproduce the experimental results (also see Q3)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Are the prompts used in the framework to drive the LLMs automatically generated? If yes, how do you ensure the quality of the prompts? If not, how does the approach scale up?\n\nFor each step from data generation to model evaluation, how are the prompts optimized?\n\nThe description of quality confirmation is a bit vague. The example given about the issues is not clear. How are such issues detected? How is ChatGPT prompted to refine A’ and P’, and how is the quality of the refinement ensured? How is the rule-based filtering performed, e.g., how do you automatically detect whether parameter constraints are violated?\n\nFor the OOD settings, how are the other models evaluated? Are they also given the same input as Workflow Llama, i.e., the list of required APIs?\n\nGiven that synthetic data takes up the larger portion of the final benchmark (91k out of 11k) and its quality is uncertain, it would be interesting to also see how Workflow Llama performs on the real-world data alone, i.e., without the synthetic data, compared to other models.\n\nThe proposed framework outperforms all the other models by an extremely large margin in the first four columns of Table 2 (such as 9.4% vs 0.5). This seems suspicious and needs further examination and explanation.\n\nIn Table 2, both CodeBLEU and Pass Rate are used: one examines the similarity of the code and one examines whether the code is executable. It would be interesting to combine both of them, such as setting a threshold on the CodeBLEU score when checking the Pass Rate to compare models, since Pass Rate itself doesn’t necessarily indicate the success of the workflow orchestration." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper is well-written and easy to follow. Examples are given to illustrate the important steps.\n\nThe extended dataset improves the diversity and complexity of the samples in the training data, allowing the models to adapt to situations that are closer to real-world applications.\n\nThe experimental results show that the proposed framework outperformed the existing models by a large margin."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a framework that leverages LLMs to automate workflow orchestration. It extends the real world dataset by creating synthetic data through LLMs, to train the model. It conducts experiment study to compare the proposed framework with several proprietary and open source models, using CodeBLEU and Pass Rate as the metrics." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper mainly relies on prompting LLMs to generate the results of the key steps, such as data generation, commenting actions, and plan generating. It seems that there is a large space of prompt engineering to improve the performance but left to be undone. \n\nThe proposed framework bypasses the API selection when handling the OOD setting by directly serving the required APIs as the input. This greatly simplifies the problem of automating workflow orchestration. \n\nThe paper relies heavily on LLMs for the entire process, including generating the dataset, training the model, and evaluating the models (only less than 82% accuracy in a small sample with 330 instances in total). This weakens the technical contribution and the reliability of the experiment results." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "Potential ethical considerations were taken into account in the Appendices." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I am curious about the choice of ChatGPT to run both the query expansion phase of the dataset creation (section 3.2) and the experiments (sections 4.1 and 4.2). Altough I think it was a good idea, why rely on a proprietary model (the GPT-4o) with proprietary and non-disclosed prompt engineering behind it? Did the authors try to use directly the GPT-4o model? Did they try other models to run this phase? I assume that the \"ease-of-use\" of the ChatGPT is a fair point in favor of its use, but it should be evident in the paper." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is well-presented and well-written overall. The authors identified a clear pain point in achieving an agentic LLM approach for process automation: lacking LLMs capable of correctly describing complex workflows. Although partially addressed by other works, as pointed out by the authors in the related work section (section 2), their approach is innovative in dealing with fairly more complex workflows than previous solutions.\n\nTheir approach to overcoming this limitation is sound and clever. 
They provide a fine-tuned LLM specialized to handle this kind of task.\nTo do so, they craft a well-defined dataset, and it is possible to highlight the process of dataset creation as one of the paper's main contributions, as crucial as the constructed dataset and the fine-tuned model.\nAfter the data collection, the query expansion phase is especially interesting because it uses another LLM (e.g., ChatGPT) to help overcome the lack of diversity and complexity of the gathered initial data with synthetic data. The same applies to the clever use of ChatGPT in the evaluation phase.\nCompared with other LLMs solving the same kind of problem, the presented results show the potential of the authors' solution to deal with the initially established problem, i.e., the automation of complex tasks.\nWorkflowLlama outperforms other LLMs when the workflow demands generalization to unseen instructions and unknown APIs.\n\nI also highlight the good approach to evaluating the effectiveness of the LLM-based evaluator (section 4.2), which strengthens the paper's arguments." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a detailed explanation of the construction and evaluation of the WorkflowBench dataset, which contains many examples of workflow descriptions. The dataset is later used to fine-tune the open-source LLM Llama-3.1-8B.\nThe dataset creation follows a well-established and potentially reproducible approach. It starts with data gathering, expands to increase data diversity, and generates a final workflow dataset, enhancing real collected data with synthetic data.\nThe fine-tuning intends to overcome the limitations of existing LLMs in workflow orchestration and provide a consistent improvement in process automation by proposing an agentic approach.\nThe fine-tuned LLM, called WorkflowLlama, is detailed, and its capabilities are evaluated, showing a solid capacity to orchestrate complex workflows. Its performance was compared with commercial and open-source LLMs using the test set of WorkflowBench." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The background on automation's relevance (lines 36-38) might be condensed, as automation's importance is widely recognized.\n- In section 3.1, the authors could better explain $\mathcal{D}$. The other elements were presented before, but $\mathcal{D}$ is directly introduced on line 248 without prior explanation.\n- Lines 245-247 read (my emphasis): \"This bottom-up [...], effectively **ensuring content reliability** and **minimizing the risk of hallucination**\". These are pretty strong assumptions without solid evidence, especially if generalized to every LLM. Do the authors know of published papers that confirm them (since this paper is not about reducing hallucinations)? If not, the phrase can be removed or rephrased to be more humble.\n- The paper lacks a clearer section on threats to validity. The authors provided some in Appendix E, but they should be incorporated into the main text.\n- The paper also lacks a more evident running example/use case. Although the authors provide a small case study in section 4.7 and some examples in Appendix D, a consistent running example, incorporated throughout the sections, could clarify the model's applications and emphasize practical use cases.\n\nMinor issues:\n- Line 52 mentions GPT-4, while the rest of the paper uses GPT-4o(-mini). 
Maybe it is only a typo, or the work mentioned indeed uses the GPT-4 model. Since they are different models, it can be good to double-check every reference to ensure it always talks about the correct model, maybe highlighting somewhere they are not the same.\n- The size of Figure 2 can be increased to improve readability.\n- Figures 3,5 and 6 could improve accessibility through higher contrast colors.\n- Although well-written, the paper is sometimes repetitive (e.g., sections 1 and 3). Proofreading with this issue in mind may help the authors achieve a better final text." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. The performance of generic LLMs in workflow orchestration through agent-like approaches would serve as an interesting baseline for comparison. Through this comparison, we could better understand whether the core challenge lies in the model's reasoning capabilities or in its ability to maintain consistent output formats. This point won't affect the score; it's just for discussion." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Accurate and valuable problem identification. The paper identifies the limitations of existing work in handling real-world workflows: difficulty in processing long-sequence workflows with complex logical relationships like control flow. This observation directly addresses practical application pain points, making the motivation very solid.\n2. Proposes a systematic data construction method. The paper designs a complete data synthesis process: from real data collection, workflow abstract representation, data synthesis to quality control, with detailed design considerations at each stage. The resulting large-scale dataset provides an important resource for this field.\n3. Comprehensive experimental validation. The effectiveness of the method is strongly supported through multi-dimensional evaluation using CodeBLEU and Pass Rate, cross-domain testing with T-Eval, along with detailed case analysis and ablation studies." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a pipeline for constructing workflow (RPA) data to enhance LLMs' workflow orchestration capabilities. The core motivation stems from the observation that real-world workflows require more complex logic and longer sequences than what current LLMs can directly generate. The primary contribution is the construction of a dataset containing over 100K samples. The main limitations lie in the lack of certain technical details (such as specific ChatGPT versions used for annotation) and the lack of deeper investigation into model capability improvements." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
Important technical details are missing. The paper mentions using ChatGPT for data annotation and optimization multiple times but doesn't specify the model version. For work with data construction as its core contribution, this seriously affects reproducibility and rigor.\n2. Lacks methodological innovation and mechanism analysis. Although data construction is an important contribution, the paper lacks in-depth analysis of how this data enhances model capabilities. Specifically, it doesn't investigate whether the improvements come from enhanced logical reasoning abilities or simply better format matching. Without such analysis, it's difficult to determine if the model has truly learned to reason about complex workflows or has merely memorized patterns from the training data.\n3. Missing critical ablation experiments. The paper doesn't compare the performance difference between the Annotator Model and WorkflowLLM, despite both models using the same training method and data type. This results in question the necessity of the data synthesis strategy and weakens the core argument.\n4. The paper mentions that the Annotator Model can generate workflows with over 70 actions, and theoretically WorkflowLLM should be capable of the same. Given the general limitations of LLMs in long sequence generation, including such long-sequence workflows in the appendix would further demonstrate the work's contribution.\n5. Given that the core contribution is data construction, whether the dataset will be open-sourced directly affects the practical value of the work. The authors should consider open-sourcing the dataset to promote development in this field." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024workflowllm,\ntitle={Workflow{LLM}: Enhancing Workflow Orchestration Capability of Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3Hy00Wvabi},\nnote={under review}\n}" }, "abstract": { "value": "Recent advancements in large language models (LLMs) have driven a revolutionary paradigm shift in process automation from Robotic Process Automation to Agentic Process Automation by automating the workflow orchestration procedure based on LLMs. However, existing LLMs (even the advanced OpenAI GPT-4o) are confined to achieving satisfactory capability in workflow orchestration. To address this limitation, we present WorkflowLLM, a data-centric framework elaborately designed to enhance the capability of LLMs in workflow orchestration. It first constructs a large-scale fine-tuning dataset WorkflowBench with 106,763 samples, covering 1,503 APIs from 83 applications across 28 categories. Specifically, the construction process can be divided into three phases: (1) Data Collection: we collect real-world workflow data from Apple Shortcuts and RoutineHub, transcribing them into Python-style code. We further equip them with generated hierarchical thought via ChatGPT. (2) Query Expansion: we prompt ChatGPT to generate more task queries to enrich the diversity and complexity of workflows. (3) Workflow Generation: we leverage an annotator model trained on collected data to generate workflows for synthesized queries. Finally, we merge the synthetic samples that pass quality confirmation with the collected samples to obtain the WorkflowBench. Based on WorkflowBench, we fine-tune Llama-3.1-8B to obtain WorkflowLlama. 
Our experiments show that WorkflowLlama demonstrates a strong capacity to orchestrate complex workflows, while also achieving notable generalization performance on previously unseen APIs. Additionally, WorkflowBench exhibits robust zero-shot generalization capabilities on an out-of-distribution task planning dataset, T-Eval." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Large Language Models", "Process Automation", "Workflow", "Tool Learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/61b261072d2e135d1cbe1c150891ef1b3f631413.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/1d0fe4cb519a7fc7de0de5e56b9068fb29b1e1a4.zip" }, "title": { "value": "WorkflowLLM: Enhancing Workflow Orchestration Capability of Large Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3IFRygQKGL
OptionZero: Planning with Learned Options
main
Active
Option;Semi-MDP;MuZero;MCTS;Planning;Reinforcement Learning
reinforcement learning
6;6;8;8
4;3;4;3
3;3;4;3
2;3;4;4
3;3;4;3
7
3.5
3.25
3.25
3.25
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Clerical: In section 5.2 the $l_{1}$ option setting is mentioned as a baseline, but Table 1 compared the options to something called $l_{0}$. Do $l_{0}$ and $l_{1}$ refer to the same baseline? If so, using consistent notation will help make the results section more readable." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The need for efficiency in decision making in RL is clear, as single-step actions are slow and computationally expensive (even more so in slow simulators). Thus, the problem addressed by OptionZero is clear and its existence is well-motivated. Additionally, since much prior work in options appear to be in manually defined and demonstration-based settings, the generalisability of OptionZero is a strong selling point.\n\nWithin fixed computational constraints, the idea of decreasing the frequency of queries to the network is a strong idea for the current state of RL. Also important is the notion of learning subroutines which the options network will identify as useful in different scenarios and not have to re-learn temporal relationships.\n\nThe flexibility to play options or primitive actions results in tailored reactions to scenarios, as an agent may need the fine-grained approach taken by traditional RL. The main results in Table 1 indicate the validity of the method, as using options provides a performance benefit more often than not, with longer option limits sometimes outperforming shorter ones." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The work proposes a revamped approach to the well-known options framework, which allows agents to take temporally extended actions as well as myopic ones. By combining a network which learns options with MCTS MuZero (which models transition dynamics) the authors propose a method to utilise options alongside single actions in self-play games." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "It is unclear why options are outperformed by primitive actions in certain environments. The authors suggest that in environments with high combinatorial complexity, learning of the dynamics model may be difficult and thus options may simply produce more overhead than actual benefit. A more detailed analysis of these environments would be beneficial, for e.g. investigate whether there is a correlation between the stochastic branching factor of the environment and the performance of options.\n\nAdditionally, it seems that longer options may improve efficiency but not always increase performance when those options may be overextending in environments where more granular control is required. Have the authors considered implementing dynamic options lengths somehow? 
This may make the idea more viable, or at least a discussion of the complications of implementing it would be a good addition to the work." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Why select these 26 Atari games instead of using the standard 57 Atari games?\n1. What are the hardware resources for conducting the experiments?\n1. How long does training a single Atari game with OptionZero take?\n1. How long does training a single Atari game with MuZero take?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is well-written. The authors explain the use of options clearly with a toy example, demonstrating how options are used. The empirical results are also strong, achieving high mean normalized scores." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents OptionZero, a novel approach that incorporates an option network into MuZero and allows the agent to learn temporally extended actions. The authors conduct empirical experiments and find that OptionZero outperforms MuZero in terms of mean human-normalized score on 26 Atari games." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The actual benefits that options bring are not clear. In the intro, the paper claims options allow for \"searching deeper\", but the empirical analysis shows \"deeper search is likely but not necessary for improving performance\". While it's nice to have the option to use options, could the authors provide a more detailed analysis of options beyond a deeper search? \n\nThe paper could also benefit from discussions of \n1) the trade-offs between increased complexity and performance gains;\n2) how much tuning the authors performed to make OptionZero work; were there failure cases/ideas during development, and how did the authors overcome them?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- How does the dynamics network handle complex action spaces, especially in games with highly varied option paths?\n\n- What are the specific computational costs of incorporating the option network, both in training and during MCTS simulations? 
Could the authors discuss the overhead associated with the proposed method?\n\n- Can the model be applied to environments beyond games, with less predictable state transitions, and how would option discovery be affected? I would suggest adding studies in robotic environments." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Novel idea for autonomous and adaptable option discovery. The proposed method's ability to autonomously discover and tailor options to diverse game dynamics removes the need for predefined actions, making it highly adaptable across different environments.\n\n- Convincing results for enhanced planning in RL. By integrating an option network, OptionZero reduces decision frequency, enabling computational efficiency, particularly in visually complex tasks like Atari games.\n\n- Strong Performance Gains: On Atari benchmarks, OptionZero achieves a 131.58% improvement over MuZero, a significant gain compared to previous SOTA papers.\n\n- Interesting ideas to adjust option lengths, balancing performance and training efficiency, particularly useful in tasks needing variable action sequences." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors introduce OptionZero, an advanced extension of the MuZero algorithm that autonomously identifies temporally extended actions, known as options, without the need for predefined options or expert-provided demonstrations. By incorporating an option network, OptionZero enhances planning efficiency by decreasing both decision-making frequency and the computational load required for complex environments. Evaluated on a suite of Atari games, OptionZero demonstrates notable improvements in human-normalized scores compared to MuZero, accompanied by a detailed analysis of how options are utilized across varying game states and scenarios." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Inconsistent Option Use Across Games: OptionZero's reliance on options appears to vary widely across Atari games. While longer options bring substantial gains in some games, they contribute less in others. This inconsistency suggests that the model’s option-based planning may struggle to generalize well across diverse, complex environments. The paper should discuss this limitation.\n\n- Challenges in Complex Action Spaces: In games with intricate action spaces, such as Bank Heist (Atari), OptionZero’s dynamics network encounters difficulty as option lengths increase, particularly with multi-step dependencies. This issue may restrict OptionZero’s application in environments where actions are highly combinatorial, relying instead on settings with more straightforward or predictable actions.\n\n- Reduced Prediction Accuracy for Longer Options: The model’s prediction accuracy tends to decrease as options become longer, affecting planning quality where extended strategies are essential. I would recommend adding an experiment to study this effect and discussing the potential limitations of the proposed method.\n\n- Limited Application Beyond Games: Although the model shows promise in game environments, the paper does not investigate its potential beyond Atari-like settings. I would appreciate seeing results on other domains, perhaps robotics, such as Gymnasium-Robotics. 
Under the current evaluation, the method seems limited to game-based scenarios." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Please refer to the weakness part." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. This paper is well-written with a clear structure, making the research content easily comprehensible.\n\n2. By introducing the OptionZero framework and leveraging self-play for automatic option design, this study paves the way for new approaches. The novelty is good.\n\n3. The experimental results robustly support the effectiveness of the algorithm." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces the OptionZero framework, which incorporates options into MCTS and enables the automatic design of options through self-play, thereby avoiding cumbersome manual design. In Atari games, OptionZero achieves significant improvements compared to MuZero, achieving a 131.58% improvement in mean human-normalized score. It has shown promising directions for discovering and using options in planning." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. I am curious about whether the introduction of options has any impact on the theoretical optimality of MCTS.\n\n2. Is it possible to demonstrate the advancement of the algorithm more directly by solely utilizing the learned network for action selection, without relying on MCTS?\n\n3. What is the expected performance as the value of $l$ increases?\n\n4. What are the differences in the running wall-clock time required for OptionZero compared to MuZero?" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "This paper presents OptionZero, a method that integrates options into the MuZero algorithm, which autonomously discovers options through self-play games and utilizes options during planning." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024optionzero,\ntitle={OptionZero: Planning with Learned Options},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3IFRygQKGL},\nnote={under review}\n}" }, "abstract": { "value": "Planning with options -- a sequence of primitive actions -- has been shown effective in reinforcement learning within complex environments. Previous studies have focused on planning with predefined options or learned options through expert demonstration data. Inspired by MuZero, which learns superhuman heuristics without any human knowledge, we propose a novel approach, named OptionZero. OptionZero incorporates an option network into MuZero, providing autonomous discovery of options through self-play games. 
Furthermore, we modify the dynamics network in MuZero to provide environment transitions when using options, allowing searching deeper under the same simulation constraints. Empirical experiments conducted in 26 Atari games demonstrate that OptionZero outperforms MuZero, achieving a 131.58% improvement in mean human-normalized score. Our behavior analysis shows that OptionZero not only learns options but also acquires strategic skills tailored to different game characteristics. Our findings show promising directions for discovering and using options in planning." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Option", "Semi-MDP", "MuZero", "MCTS", "Planning", "Reinforcement Learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/2c03adaa2af6d28cea5615c047befbd4055e5098.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/13604cd8e5c1d54b2c3b5bc6b872907fd07b5235.zip" }, "title": { "value": "OptionZero: Planning with Learned Options" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3Imf21Jvwh
Hard-Constrained Neural Networks with Universal Approximation Theorem
main
Active
constrained optimization;universal approximation;surrogate models
optimization
3;5;6;6
4;5;3;3
2;3;3;3
1;3;3;3
3;2;2;3
5
3.75
2.75
2.5
2.5
-0.492366
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "As I liked a lot the paper, here I also include a series of suggestions to further improve the paper:\n\n- At page 4, shouldn't the sup-norm be defined as $||f||_\\inf = sup_{x \\in \\mathcal{X}} |f(x)|$?\n- At page 5, I think it would have been great to have a small example with just a neural network with two outputs $y_0$ and $y_1$ and the constraint $y_0 \\ge y_1$. Then you could for example show that if $y_0 = 3$ and $y_1 = 4$ then $a(x)=[−1,1]$, $b(x) =0$ and \n$$\n\\mathcal{P}(f_\\theta)(x) = f_\\theta(x) - \\frac{a(x)}{\\||a(x)\\||^2} \\text{ReLU}(a(x)^\\top f_\\theta(x) - b(x)) = [3.5, 3.5].\n$$\n- At page 5, among the assumptions there is written that the constraints need to be feasible. Just to improve the readability of the paper and also make sure everything is well defined, it would help to add the meaning of the word feasible (i.e., \"that there exists at least one solution or point within the domain of interest that satisfies all the constraints simultaneously\")\n- At page 5 the authors give the assumptions for which the number of constraints needs to be lower or equal than $n_{out}$. I think it would be really helpful to add a simple example with a set of constraints that cannot be captured (e.g., $x \\ge 0, y \\ge 0, x+y \\ge 0$)\n- At page 8, in the experimental analysis, the constraints you show clearly define a non-convex space. However, for your layer to work you need to have a set of constraints that defines a convex space. Are you simply applying different projections on the ground of the value of $x$? If that is the case, I personally find this experiment a bit misleading as this only works because $x$ is a known input. I do not think your layer would work in a setting where you have a constraint of the type if $y_1 > y_2$ then $y_2+ y_3 < 1$, tight?\n- Finally, I think it would also be nice if you could extend on how this type of work is relevant for the SafeAI literature, as creating models that are complaint by design with a set of constraints obviously increases their safety." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "In general, I like this paper quite a lot and I think it has different strengths (listed below):\n\n- The paper handles a very important problem. \n- The paper is technically sound \n- The paper is written very well" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The proposes a new layer able to project the output of the neural network (which might be non-compliant with a set of hard constraints) back into a \"safe space\" where the constraints are guaranteed to be satisfied." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper has only one major flow: it is not well placed in the literature. \n\nIndeed, I do not think the authors are familiar with the Neuro-symbolic AI literature, where the problem of learning with hard constraints has already been studied. In particular, there is a research group that has worked a lot on creating layers that are make neural networks compliant by construction with a set of hard constraints [1,2,3]. [1] is the first work that proposed this kind of approach with constraints expressing hierarchies over the outputs. In the latest works they worked with hard constraints expressed in propositional logic [2] and as linear inequalities [3]. Obviously I believe [3] is particularly relevant to your paper and it would be nice to have a comparison between the two methods (at least in terms of discussion for the rebuttal phase and experimental only for the camera ready). Delving more on the logical side you have works like Semantic Probabilistic Layer that gives a probabilistic perspective to hard constraints expressed in propositional logic and can guarantee their satisfaction by construction [4]. Finally, you can find an entire line of work which maps the outputs of the neural network into logical predicates and allows reasoning on top of these predicates (see e.g., [5,6,7]) which then also guarantees the satisfaction of the constraint. \n\nThe final rate is below the acceptance threshold because of this. However, I am fully aware that it is often very hard to keep up with the extensive literature available in ML, so I will be very open to increasing my score. \n\nReferences:\n\n[1] Eleonora Giunchiglia and Thomas Lukasiewicz. Coherent hierarchical multi- label classification networks. In Proc. of NeurIPS, 2020.\n\n[2] Eleonora Giunchiglia, Alex Tatomir, Mihaela Catalina Stoian, and Thomas Lukasiewicz. CCN+: A neuro-symbolic framework for deep learning with requirements. International Journal of Approximate Reasoning, 171, 2024.\n\n[3] Mihaela C. Stoian, Salijona Dyrmishi, Maxime Cordy, Thomas Lukasiewicz, and Eleonora Giunchiglia. How Realistic Is Your Synthetic Data? Constraining Deep Generative Models for Tabular Data. In Proceedings of International Conference on Learning Representations, 2024.\n\n[4] Kareem Ahmed, Stefano Teso, Kai-Wei Chang, Guy Van den Broeck, and Antonio Vergari. Semantic probabilistic layers for neuro-symbolic learning. In Proceedings of Neural Information Processing Systems, 2022.\n\n[5] Robin Manhaeve, Sebastijan Dumancic, Angelika Kimmig, Thomas Demeester, and Luc De Raedt. DeepProbLog: Neural probabilistic logic programming. In Proceedings of Neural Information Processing Systems, 2018.\n\n[6] Connor Pryor, Charles Dickens, Eriq Augustine, Alon Albalak, William Yang Wang, and Lise Getoor. Neupsl: Neural probabilistic soft logic. In Proceedings of International Joint Conference on Artificial Intelligence, 2023.\n\n[7] Emile van Krieken, Thiviyan Thanapalasingam, Jakub M. Tomczak, Frank Van Harmelen, and Annette Ten Teije. A-neSI: A scalable approximate method for probabilistic neurosymbolic inference. In Proceedings of Neural Information Processing Systems, 2023." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- I am a little confused about part iii) of Proposition 4.2. Namely, the projection preserves the distance from the boundary of the feasible set when $\\bar{f}_\\theta (x)$ satisfies the constraint. Would you mind sharing a geometric intuition?\n- I am also confused about the $C_\\leq (f (x))$ notation in line 359. What does $C$ denote? This is different from the $C (x)$ in Eq.(4), right?\n- Regarding Figure 2, it looks like all models perform reasonably good in the region which the training data lie in, and the difference occurs outside of data coverage. I am confused why \"Soft\" seems much worse than others. If I understood it correctly, \"Soft\" penalizes when the model output violates the constraints. Since all training points are feasible, I intuitively expect \"Soft\" to behave similarly to \"NN\", but this is not the case. Could you please explain why such difference? Also, how would \"Soft + proj\" look like?\n- For the \"safe control policy\" experiment in Section 5.3, what do you think is the biggest advantage of the proposed method compared with non-learning methods such as model predictive control?\n- Line 500 mentions that the constraint in Eq.(12) can be conservative, leading to worse performance compared to \"Soft\" and \"DC3\". Is it possible to adjust the level of conservativeness by changing $\\alpha$?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "I have not worked on constrained neural networks, and hence I am unfamiliar with a lot of the cited literature. That being said, judged based on the content of this submission, the results are promising and meaningful, and the presentation is mostly clear." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a type of hard-constrained neural network by introducing differentiable projection layers. Specifically, if the constraints are affine and the number of constraints are no greater than the output dimension, the projection can be found in closed form. For other convex constraints, the authors propose to apply the differentiable optimization framework to compute the projection iteratively. The authors use experiments including learning an optimization solver and controlling a safety-critical dynamical system to demonstrate the effectiveness of the proposed work." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- To use the closed-form projection algorithm Eq.(7), we need $n_{ineq} + n_{eq}$ to be no greater than the output dimension. Is this restrictive in practice? 
In the included experiments, which one of them uses closed-form projection?\n- For Eq.(11), should $u$ also be a function of $t$, i.e., $u (t)$?\n- In Table 4, why are rows 2 and 4 marked as red even though they are feasible?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Could the authors train DC3 for longer, or with more inner iterations to satisfy the inequality constraints? If this is deemed infeasible, can the authors provide an explanation on the discrepancy with the results in the original paper?\n- Would it be possible to provide \"DC3 + Proj\" results?\n- Why is DC3 absent from Table 5?\n- In the toy example, training points are sampled from [-1.2, 1.2], but then the networks are evaluated on [-2, 2]. Aren't samples in that area OOD, in a sense? Couldn't that explain the performance of the baselines? I understand that guaranteed constraint satisfaction is an advantage of the proposed approach, but these points should be discussed. (e.g., by providing results on [-2, 2] training)\n- What is lost by the fact that HardNet-Aff does not rely on an orthogonal projection? Does this imply anything concerning the hardness of learning the function through gradient-based method? An interesting ablation would be to compare HardNet-Aff with HardNet-Cvx on a setup where both are supported.\n- It would be interesting to see some experiments on (even slightly) larger networks. Would some methods benefit more from the additional capacity than the others?\n\nIn general, I think the quality of the work would clearly increase if the authors were more honest on the limitations of the proposed approach (see weaknesses above)." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The idea to enforce hard constraints by construction through a projection layer is simple and neat. \nDifferently from previous work in the area, universal approximation theorems are provided.\nThe experiments show that, at least for affine constraints supported by HardNet-Aff, HardNet works quite well in practice (albeit at a small scale).\nFinally, I found the related work section to be well-written and fairly comprehensive." 
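For concreteness, the "DC3 + Proj" baseline requested in the questions above can be read as a post-hoc feasibility projection applied to any predictor's output at inference time. The sketch below uses cvxpy with randomly generated constraint data (G, h, A, b are placeholders, not quantities from the paper), and is only one plausible reading of that baseline.

```python
import numpy as np
import cvxpy as cp

def project_to_feasible(y_hat, G, h, A, b):
    """Post-hoc projection: argmin_y ||y - y_hat||^2 subject to G y <= h and A y == b."""
    y = cp.Variable(y_hat.shape[0])
    prob = cp.Problem(cp.Minimize(cp.sum_squares(y - y_hat)),
                      [G @ y <= h, A @ y == b])
    prob.solve()
    return y.value

rng = np.random.default_rng(0)
y_hat = rng.normal(size=5)                          # e.g. a (possibly infeasible) DC3 output
G, h = rng.normal(size=(3, 5)), rng.normal(size=3)  # placeholder inequality constraints
A, b = rng.normal(size=(1, 5)), rng.normal(size=1)  # placeholder equality constraint
y_proj = project_to_feasible(y_hat, G, h, A, b)
print(np.all(G @ y_proj <= h + 1e-6), np.allclose(A @ y_proj, b, atol=1e-6))
```

Applying such a projection only at test time leaves training untouched, which is presumably why the reviewer treats it as a natural missing baseline.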
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents HardNet, an approach to train neural networks that satisfy hard constraints by construction.\nThe core idea of the paper is to append a projection layer at the end of the network in order to bring the network output onto the feasible set.\nTwo different schemes are presented: one using a closed-form (non-orthogonal) projection for affine constraints, and one resorting to previous work presenting differentiable convex optimization solvers, in case of more general convex constraints.\nUniversal approximation theorems for the architectures are presented.\nExperimental results on a variety of benchmarks are presented, demonstrating that HardNet attains good performance while satisfying the constraints." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main weaknesses of the paper are threefold: HardNet-Cvx, the assumptions behind HardNet-Aff, and the experimental section.\n\n*HardNet-Cvx*: the idea to use differentiable optimizers to perform the projection does not appear to be completely novel. DC3 discusses it in related work, excluding it because of large computational cost (this large cost is definitely confirmed at inference in the experiments in Table 5). The rayen paper uses it as a baseline (named PP in their paper). I do not know if the authors are aware of this, but these points absolutely need to be acknowledged throughout the paper. Furthermore, the only example over HardNet-Cvx is used (Table 5) appears to nevertheless use affine constraints (albeit, as far as I understand, too many to be supported by HardNet-Aff). In this instance, its runtime is extremely large, questioning its practical applicability.\n\n*HardNet-Aff assumptions*: the assumptions required for HardNet-Aff seem very strong to me. It seems to be that a simple interval constraint per network output coordinate would already be unsupported, hence incurring the large cost associated to HardNet-Cvx. Could the authors comment on this?\n\n*Experiments*: my main concern over the experimental section is the surprisingly bad performance of DC3. In the original paper, all constraints appear to be satisfied in practice. Is there anything I am missing here? Was DC3 run for an insufficient number of iterations? I understand that for HardNet the constraints hold by construction, but DC3 appears to be fairly strong empirically, in the original paper. Important details such as training times for each scheme appear to be omitted (or at least, do not feature prominently). \"DC3 + Proj\" would also appear to be a missing, yet very interesting baseline. Further details are provided as questions." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I'd like to explain more in detail my doubts.\nConsider the simple case of $1-d$ output and a single affine constraint. 
The projection layer reduces to a simple rescaled ReLU, and the whole network has zero gradient where the constraint is not satisfied. \n\nThis effect is true in general. In fact, if we evaluate the Jacobian of the projection layer when the constraint is not satisfied ($J_{\mathcal{P}} = I-a(x)a(x)^T$), we can see that the gradient of the network will always be orthogonal to the constraint vector $a(x)$. \n\nFor this reason, if $f_\theta$ is initialized outside the feasible region, it should be impossible to \"re-enter\" it by simply following the gradient. This means that the whole optimization would get \"stuck\" on the boundary of the feasible set, which might not be ideal. In stochastic gradient descent, this issue might be mitigated; however, I believe this is an important discussion to have in the paper.\n\nThe proposed projection (for the affine variant) works in two steps, reducing the dimensionality of the output space using the equality constraints, and performing a projection in the reduced space. Is this better than simply treating equalities as a pair of inequalities? This aspect should be investigated to justify the additional complexity of the method." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper is very clear in its presentation. It is well structured and reads well. The contributions of the paper are clearly presented and well summarized, both in text and by images and tables. The empirical evaluation includes plenty of relevant and diverse scenarios." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a simple framework for imposing constraints on input-output relations in neural networks.\nThe approach consists of appending a final projection layer to the network, ensuring that the constraints are satisfied by construction. Moreover, the authors show (formally) that this projection operation does not hinder the expressivity of the network, and empirically evaluate the approach on various scenarios." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I have some doubts regarding the novelty of the paper and the technical discussion. \n\nThe idea of satisfying hard constraints using a final differentiable projection layer is mentioned in the works cited as references, and many other works referencing the idea exist (e.g., https://arxiv.org/abs/2111.10785 and https://arxiv.org/abs/2307.10459). If the contribution is simply the extension of the idea to input-dependent constraints, this should be stated more clearly.\n\nAs for the soundness of the approach, while the additional projection layer is differentiable, its derivative is not well behaved.\nClaims such as \"meeting the required constraints while allowing its output to be backpropagated through to train the model via gradient-based algorithms\" and \"This allows us to project the output $f_θ(x)$ onto the feasible set $C(x)$ and train the projected function via conventional gradient-based algorithms\" are not substantiated by a proper discussion of the gradient properties of the resulting network.\nIn fact, as presented, the gradient is always orthogonal to the constraint. \nThis observation is not novel, and to my understanding it is the main motivation driving the development of alternatives to projection methods."
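To make the Jacobian argument above concrete, here is a small numerical check. It assumes a single inequality constraint $a^\top y \le b$ with $\|a\| = 1$ and a projection of the form $P(z) = z - a\,\mathrm{relu}(a^\top z - b)$; this is a hypothetical stand-in that has the Jacobian $I - a a^\top$ the reviewer mentions, not the paper's exact layer.

```python
import torch

# With a violated constraint a^T y <= b (||a|| = 1) and P(z) = z - a * relu(a^T z - b),
# the Jacobian on the infeasible side is I - a a^T, so any gradient backpropagated
# through P has no component along a (hypothetical layer form, not the paper's).
torch.manual_seed(0)
a = torch.randn(3)
a = a / a.norm()
b = torch.tensor(0.0)

z = (torch.randn(3) + 5.0 * a).requires_grad_(True)  # "network output", pushed to the infeasible side
y = z - a * torch.relu(a @ z - b)                    # projected output, feasible by construction

loss = (y ** 2).sum()                                # any downstream loss
loss.backward()

print("violated before projection:", bool(a @ z.detach() > b))
print("<dL/dz, a> =", float(torch.dot(z.grad, a)))   # ~0: no gradient component along a
```

This matches the reviewer's observation that, on the infeasible side, the loss gradient carries no signal along $a$, so following the gradient alone cannot pull the pre-projection output back into the feasible region.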
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024hardconstrained,\ntitle={Hard-Constrained Neural Networks with Universal Approximation Theorem},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3Imf21Jvwh},\nnote={under review}\n}" }, "abstract": { "value": "Incorporating prior knowledge of input-output relationships into machine learning models has gained significant attention, as it enhances generalization from limited data and ensures trustworthy predictions. However, most existing approaches use soft constraints by penalizing violations through regularization, which offers no guarantee of constraint satisfaction---a critical requirement in safety-critical applications. On the other hand, imposing hard constraints on neural networks may hinder their representational power, adversely affecting performance. To address this, we propose HardNet, a practical framework for constructing neural networks that inherently satisfy hard constraints without sacrificing model capacity. Specifically, we encode affine and convex hard constraints, dependent on both inputs and outputs, by appending a differentiable projection layer to the network’s output. This architecture allows unconstrained optimization of the network parameters using standard algorithms while ensuring constraint satisfaction by construction. Furthermore, we show that HardNet retains the universal approximation capabilities of neural networks. We demonstrate the versatility and effectiveness of HardNet across various applications: fitting functions under constraints, learning optimization solvers, optimizing control policies in safety-critical systems, and learning safe decision logic for aircraft systems." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "constrained optimization", "universal approximation", "surrogate models" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/d0ad082c6f735a75a6663071293d2c5603bb8de6.pdf" }, "presentation": null, "primary_area": { "value": "optimization" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Hard-Constrained Neural Networks with Universal Approximation Theorem" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3JfvvuPXsH
PointRecon: Online 3D Point Cloud Reconstruction via Ray-based 2D-3D Matching
main
Active
3D Reconstruction
applications to computer vision, audio, language, and other modalities
1;3;5;5
4;3;4;4
3;2;3;2
2;2;2;2
3;2;3;2
3.5
3.75
2.5
2
2.5
0.174078
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I will use this questions paragraph for questions/suggestions.\n\nRegarding the dependency on accurate camera poses:\n- How can the proposed method be integrated with SLAM or other online techniques?\n- How does the method work with imperfect poses? An experiment on this would be beneficial.\n\nRegarding the weak experiments: \n- Perform experiments on other datasets. The authors can find many in the cited papers.\n- Compare to NICER-SLAM.\n- Compare to the vast literature of photometric stereo with known poses; few examples [m1-m4].\n\nAll in all, I don't feel it is realistic for the authors to fix all my issues within the discussion period." }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "I don't find any obvious strengths in the paper. The method has many unjustified steps that completely ignore prior work. The experimental section is very weak, with the baselines outperforming the proposed method in many metrics, with the proposed method being quite slow even though it was sold as an online method, and with having results only on a single dataset." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a method for estimating the 3D structure from an image sequence given known camera poses. The method incrementally adds images into the reconstruction, further optimizing the visible parts. Each image is encoded through a transformer-based encoder and feature pyramid. Then, monodepth prediction is applied to the first image in the sequence. Later, the 3D points are adjusted along their rays based on the new images." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Major comments:\n- The 2D-3D matching approach appears highly dependent on precise camera pose estimation. Sampling along a ray and projecting back to the camera to confirm matches assumes a high degree of pose accuracy, as even minor angular deviations can significantly affect the rays, especially for distant points. Achieving such accuracy is difficult in incremental or real-time systems without extensive bundle adjustment rounds. Consequently, this method can only be applied once pose estimation (and potentially a full reconstruction) is complete. If so, the rationale for incremental processing and online capability in the proposed pipeline is unclear, as an offline method would first need to process the entire sequence. Moreover, image matching with known poses is well-explored, with numerous relevant publications that the authors seemingly overlook. Without being exhaustive, here are a few examples [m1-m4]. 
Actually, a large portion of the multi-view stereo literature addresses matching, making the proposed appear excessively complicated without clear justification (especially, since the main reason, i.e. being incremental, does not seem to make sense, as I write earlier).\n- The experiments are very weak. Showing results on a single dataset, while all other baselines work well on others as well, is clearly insufficient. Also NICER-SLAM [j] is missing that also only use RGB. \n- Table 1: Throughout the paper, the authors describe their method as \"online\", but it runs at ~1 FPS, which does not truly qualify as online. Additionally, its accuracy and precision are lower than methods that are nearly an order of magnitude faster.\n- Table 2: The same observations apply here as for Table 1. In many metrics, the proposed method is underperformed by significantly faster alternatives.\n- Conclusion: \"Experiments show that our approach achieves state-of-the-art performance.\" This statement is inaccurate. While some metrics may show strong performance, others reveal that it lags behind baseline methods.\n- The assumption of known camera poses should be stated upfront (e.g., in the abstract). The current wording suggests that the authors address both geometry and pose estimation by using the term \"reconstruction\" which is not true. \n\nMinor comments:\n- Experiments: Although the authors opt not to use the depth channel, it would still be informative to show comparative results with methods that do, such as [a,b,c,d], since the ScanNet dataset includes this data. This comparison would help readers understand how RGB-only performance currently compares to RGB-D methods.\n- L053: Missing related work on volumetric methods: [a,b,c,d].\n- L066: Missing related work: [e,f].\n- Paragraph at L141: Not all volumetric methods require a predefined grid [c].\n- Several methods for image ray to 3D point matching are entirely ignored: [g,h,i].\n- Fig.2: The image is very dark; consider using a different image or adjusting the visualization for better clarity.\n- L130: \"volume .\" -> \"volume.\"\n\n[a] Oleynikova, H., Taylor, Z., Fehr, M., Siegwart, R. and Nieto, J., 2017, September. Voxblox: Incremental 3d euclidean signed distance fields for on-board mav planning. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 1366-1373). IEEE.\n\n[b] Grinvald, Margarita, et al. \"Volumetric instance-aware semantic mapping and 3D object discovery.\" IEEE Robotics and Automation Letters 4.3 (2019): 3037-3044.\n\n[c] Zheng, J., Barath, D., Pollefeys, M. and Armeni, I., 2025. Map-adapt: real-time quality-adaptive semantic 3D maps. In European Conference on Computer Vision (pp. 220-237). Springer, Cham.\n\n[d] Miao, Y., Armeni, I., Pollefeys, M. and Barath, D., 2024. Volumetric semantically consistent 3d panoptic mapping. IROS 2024\n[e] Wang, S., Leroy, V., Cabon, Y., Chidlovskii, B. and Revaud, J., 2024. Dust3r: Geometric 3d vision made easy. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 20697-20709).\n\n[f] Leroy, V., Cabon, Y. and Revaud, J., 2024. Grounding Image Matching in 3D with MASt3R. ECCV 2024\n\n[g] Chen, B., Parra, A., Cao, J., Li, N. and Chin, T.J., 2020. End-to-end learnable geometric vision by backpropagating pnp optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 8100-8109).\n\n[h] Zhou, Q., Agostinho, S., Ošep, A. and Leal-Taixé, L., 2022, October. 
Is geometry enough for matching in visual localization?. In European Conference on Computer Vision (pp. 407-425). Cham: Springer Nature Switzerland.\n\n[i] Wang, S., Kannala, J. and Barath, D., 2024. DGC-GNN: Leveraging Geometry and Color Cues for Visual Descriptor-Free 2D-3D Matching. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 20881-20891).\n\n[j] Zhu, Z., Peng, S., Larsson, V., Cui, Z., Oswald, M.R., Geiger, A. and Pollefeys, M., 2024, March. Nicer-slam: Neural implicit scene encoding for rgb slam. In 2024 International Conference on 3D Vision (3DV) (pp. 42-52). IEEE.\n\n[m1] Goesele, M., Snavely, N., Curless, B., Hoppe, H. and Seitz, S.M., 2007, October. Multi-view stereo for community photo collections. In 2007 IEEE 11th International Conference on Computer Vision (pp. 1-8). IEEE.\n\n[m2] Žbontar, J. and LeCun, Y., 2016. Stereo matching by training a convolutional neural network to compare image patches. Journal of Machine Learning Research, 17(65), pp.1-32.\n\n[m3] Scharstein, D. and Szeliski, R., 2002. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. International journal of computer vision, 47, pp.7-42.\n\n[m4] Furukawa, Y. and Ponce, J., 2009. Accurate, dense, and robust multiview stereopsis. IEEE transactions on pattern analysis and machine intelligence, 32(8), pp.1362-1376." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please refer to the weaknesses part." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Using point clouds as a representation for the scene is more scalable and generalizable compared to voxel and implicit surface representations.\n\n2. Loosening epipolar constraints could potentially improve performance." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a real-time scene reconstruction approach from multi-view images, with a point cloud scene representation. When a new image is introduced, the global scene is optimized by adjusting the locations of existing points, adding new points, and removing redundancies. This process is achieved through a ray-based and learning-based 2D-3D matching technique. Although the method achieves high accuracy, it is more time-consuming and lacks distillation experiments to verify the effectiveness of the key designs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Although this method improves reonstruction accuracy compared to baseline methods, it is $6\\times$ more time-consuming, with a processing time of 0.6 s/frame, which limits its feasibility for online reconstruction.\n\n2. 
In Line 243, the authors mention adjusting only visible points, which can lead to gaps at the edges between visible and non-visible regions. Although scene normals are supervised during training, there is no test-time guarantee to avoid this issue. \n\n3. Individually predicting an offset for each point in the scene adjustment introduces local noise, as seen in Figure 5.\n\n4. There is a lack of detailed ablation studies and analysis on the ray-based matching method (comparing with point-based) and the relaxation of epipolar geometry constraints.\n\n5. As an online reconstruction method, it is necessary to provide hardware testing information and comparisons of memory usage.\n\n6. The paper is somewhat hard to read and follow." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Line 244: How is visibility determined?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The motivation is sound: the authors address the limitations of monocular depth estimation, introducing multi-view matching to improve depth prediction and ensure consistent 3D reconstruction.\n\n- Extensive experiments benchmark the method against both offline and online approaches with different scene representations." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a method for online 3D reconstruction from monocular RGB images. The proposed method maintains a global point cloud, as the 3D representation, by dynamically adjusting, adding, or removing 3D points as new frames arrive. The 3D point update is achieved through a ray-based 2D-3D matching technique, which projects 3D points along rays to another view to gather multi-view information to refine depth predictions along camera rays. The proposed method is evaluated against various prior methods on the ScanNet dataset," }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper would benefit from a high-level overview explaining (1) how the method is initialized on the first frame and (2) how the global representation is iteratively refined with each new frame before detailing individual steps.\n\n- Depth updates rely on a single new image at each step, which contradicts with most multi-view 3D reconstruction methods that integrate multiple views simultaneously. Using only one view at a time has potential drawbacks: 1) Reduced robustness in homogeneous regions compared to multi-view approaches; 2) Limited co-visibility, impacting point matching quality; 3) Suboptimal performance in extreme depth ranges, as the baseline (i.e. the distance between two cameras) is fixed. Could the authors clarify this design choice? 
I wonder if the less smooth reconstructions observed in the experiments relate to this limitation.\n\n- Since the method relies on stereo feature matching, the view-independent color jittering may negatively impact matching quality. \n\n- While the point cloud is lightweight, the final 3D reconstruction depends on an algorithm to convert the 3D point cloud to the underlying 3D surfaces. The authors use TSDF Fusion, which inherits its limitations in accuracy and resolution." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "/" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Geometric Metadata Selection: How were metadata elements chosen? Was there an ablation study or reference to prior work guiding the selection process?\n\n- View-Dependent Confidence: Shouldn’t confidence values for each point be view-dependent? For example, a 3D point visible from one viewpoint would have higher confidence in that view but lower confidence if occluded. The current approach to learning confidence seems unclear, particularly regarding occlusions. For instance, suppose two points are aligned along the line of sight from two cameras, where one point is visible in one camera but occluded in the other. The learned confidence may end up equal, averaging the depth between points and failing to handle occlusion naturally. Could you clarify how this method correctly handles occlusions?\n\n- Dataset Generalization: The evaluation is primarily based on the ScanNet dataset. Would PointRecon generalize effectively to outdoor or unstructured environments? What specific challenges might it face in these settings?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Global Consistency: The approach maintains a unified point cloud representation, enhancing consistency across views compared to independent depth predictions.\n\n2. Efficiency in Memory: By using a sparse point cloud approach, PointRecon avoids high memory demands, unlike volumetric methods.\n\n3. Flexibility and Resolution Independence: The point-based approach is free from fixed voxel size constraints, offering flexibility for detailed reconstructions.\n\n4. Competitive Performance: It demonstrates comparable or superior performance to current methods in both depth and mesh quality metrics on ScanNet." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces PointRecon, an online 3D point cloud reconstruction method that builds a 3D scene representation from monocular RGB video inputs. PointRecon employs a ray-based 2D-3D matching technique, allowing 3D points to adjust their locations along camera rays from newly observed images without predefined voxel resolutions. 
Once new points are added, the method integrates a point cloud merging technique based on learned confidence values, resulting in a globally consistent point cloud representation. PointRecon demonstrates performance comparable to state-of-the-art methods on the ScanNet dataset." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Unclear Advantage Over Prior Methods: The benefits of PointRecon over SimpleRecon or VisFusion, which achieve similar reconstruction quality with lower latency, are not fully evident. Further clarity on the advantages or potential benefits of this method would be valuable. In terms of quality, the proposed PointRecon is similar to SimpleRecon or VisFusion; in terms of speed, PointRecon is slower than the aforementioned prior work. Is there any aspect in which the proposed method has stronger potential than prior work?\n\n2. Latency: The method's sampling approach introduces relatively high latency per frame, particularly during scene adjustment and depth prediction.\n\n3. Noise in Output: The absence of post-processing smoothing results in noisier meshes compared to other approaches with more advanced smoothing techniques.\n\n4. Complexity in Implementation: Ray-based matching and multi-level attention mechanisms increase computational complexity, which may affect scalability. For example, if this method is employed for a larger scene, the ray-based matching computation complexity will increase as the number of rays in the scene increases. Could the authors test this by running the method on a larger scene, e.g., a multi-room environment instead of a single-room scene?\n\n5. Limited Justification for Ray-Based Matching: Although an ablation study highlights the value of key components, the core concept of \"ray-based matching\" could benefit from further justification. More comparisons with alternative methods, such as point-based or traditional epipolar line matching, would strengthen the argument for this approach. The authors could run on the same dataset and test the alternatives by switching the matching module." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024pointrecon,\ntitle={PointRecon: Online 3D Point Cloud Reconstruction via Ray-based 2D-3D Matching},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3JfvvuPXsH},\nnote={under review}\n}" }, "abstract": { "value": "We propose a novel online point-based 3D reconstruction method from a posed monocular RGB video. Our model maintains a global point cloud scene representation but allows points to adjust their 3D locations along the camera rays on which they were initially observed. When a new RGB image is inputted, the model adjusts the location of the existing points, expands the point cloud with newly observed points, and removes redundant points. These flexible updates are achieved through our novel ray-based 2D-3D matching technique. Our point-based representation does not require a pre-defined voxel size and can adapt to any resolution. A unified global representation also ensures consistency from different views. Results on the ScanNet dataset show that we improve over previous online methods and match the state-of-the-art performance of other types of approaches."
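As a rough, back-of-the-envelope illustration of the first reviewer's pose-sensitivity concern (small rotation errors matter most for distant points), the sketch below propagates an angular pose error into a depth error via the standard two-view disparity relation Z = fB/d. The focal length, baseline, and angular error are assumed values, not numbers from the paper.

```python
import numpy as np

# A rotation error of theta radians shifts image projections by roughly f * theta pixels;
# with disparity d = f * B / Z, a pixel error of delta_d gives a depth error of about
# Z^2 / (f * B) * delta_d, i.e. the depth error grows quadratically with depth.
f_px = 600.0                  # assumed focal length in pixels
B = 0.10                      # assumed camera baseline in metres
theta = np.deg2rad(0.1)       # assumed small rotation error

pixel_err = f_px * theta      # approximate reprojection shift caused by the rotation error
for Z in [0.5, 1.0, 2.0, 5.0]:
    dZ = (Z ** 2) / (f_px * B) * pixel_err
    print(f"depth Z = {Z:4.1f} m  ->  depth error ~ {dZ:.3f} m")
```

Under these assumptions, even a 0.1-degree pose error gives roughly 0.4 m of depth uncertainty at 5 m range, which is the regime the reviewer is worried about for online, bundle-adjustment-free operation.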
}, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "3D Reconstruction" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/99a9a2338fee735d6bd3b5c3a8c76ae55b717141.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "PointRecon: Online 3D Point Cloud Reconstruction via Ray-based 2D-3D Matching" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3JoLo0mmHH
Reverse the auditory processing pathway: Coarse-to-fine audio reconstruction from fMRI
main
Active
Brain-to-audio reconstruction;Coarse-to-fine;fMRI;Auditory processing pathway
applications to neuroscience & cognitive science
3;5;5;8
5;3;4;2
2;3;3;3
2;2;3;3
1;2;1;4
5.25
3.5
2.75
2.5
2
-0.939336
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "**Improving clarity and motivations**\n\n- The distinction between low-level (coarse) and high-level (fine-grained) features is confusing, lacks clarity and contributes negatively to the overall appreciation of the paper. Could the authors precise what they refer to as high-level or fine-grained features and as coarse or low-level features? It is confusing to refer to \"semantics\" as coarse, as they are supposed to provide very precise and high-level descriptive information. Similarly, the spectrogram is often referred to as high-level, although it describes low-level features similar to the features extracted by the cochlea (line 86).\n\n- The description of the CLAP training (with aligned textual descriptions and audio) (line 147) and the use of prompts for incorporating music genres and phenotype information (line 117), are considered as coarse features, why? \n\n- The paper refers to as \"inverse pathway of auditory processing\" (line 17). If it is inverse; shouldn't it go from high-level to low-level? seems that the method goes from coarse-to-fine ... \"AudioMAE latent feature as the fine-grained acoustic embedding of audio\" (line 182) isn't it coarse information?\n\n- Typo line 466 ->\"it's better\"\n\n- In line 52, what are the DNN features referred to? could the authors provide more explanations?\n\n\n**Methodology/Results:**\n\n- It is unclear how the Semantic Decoder extracts semantically rich information (line 137); is it validated somehow? \n\n- One of the baseline modes is a Transformer that goes from voxel space to mel-spectrogram space. Could the authors provide more information about this implementation, as it does not seem straightforward?\n\n- Could the authors provide some motivations for using ridge regression to model brain signals (referred to as $x$ in the paper)? does this consider the autocorrelation of brain cortical activation and using the structure and spatial organisation of the auditory cortex?\n\n- Regarding the generation process and the use of the latent diffusion model, how much do the authors consider providing the ground truth as input to help the model (figure 3)? Isn't the conditioning should be enough?\n\n- In tables 1, 2, and 3, are the results averaged across subjects? if yes could you provide the std? \n\n- In Table 1, why is C2F-LDM low for PCC and PSNR?\n\n- The objective of section 3.3. and 3.4 are not very clear, for instance, lines 457 to 460. More importantly, the results do not seem to go in the direction of the conclusion of the paper, e.g. in Figure 5. b. why does the fine-grained decoding perform better for almost all experiments? The hypothesis made in section 3.3, e.g. 
\"This could be attributed to participants being more focused on the content of the stories during fMRI signal collection, potentially disregarding the speaking style of the speaker.\" (line 453) is used to motivate the next section 3.4 but is relatively weak and unclear. In section 3.4, The Brain2Music dataset performs better without prompts, although the prompts are supposed to incorporate important information such as the music genre. Why use prompts, then? seems somehow superficial. \n\n**Data**:\n\n- what about variation in performance across subjects? are the models trained per sujects? or models are used across subjects?\n\n- how are the voxels in Table 4 extracted? Is it from a template? subject-based parcellation?\n\n- Are the datasets aligned functionally (on top of anatomical registration) so that similar voxels and auditory regions across subjects can be compared?\n\n- For all datasets, the audio clips seem to be between 1.5 and 4 seconds. Is it enough to aim at reconstructing audio from a very short fMRI temporal window? What about using larger windows?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The technical contributions of the paper are strong. The methodology, with the threefold approach, is well thought out and carefully implemented. Many implementation details are provided, and the notations are clear and easy to follow. This is a great point.\n\n- Figure 2 is clear and provides sufficient details to understand the intricate methodology\n\n- The new method is compared against various baselines and three openly available datasets. Also, the code is made public, and additional results are provided. This is a great point for the openness of science. \n\n- Various ablation studies are done to confirm the contributions of specific modules (for instance the diffusion reconstruction vs using the MAE decoder)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper aims to achieve audio reconstruction from an fMRI brain signal via a coarse-to-fine approach. The idea is to replicate the audio processing stream in the human auditory cortex. The method is threefold: first, it uses a CLIP-based approach to extract an audio representation (low-level description) from the fMRI data signal. From these initial features, a high-dimensional description of the auditory feature is obtained via a guided AudioMAE. Finally, these high-level features are used as a condition of a Latent Diffusion model for mel-spectrogram reconstruction and, thus, audio reconstruction. The method achieves high results on three publicly available datasets, demonstrating strong performance for both low-level and high-level audio metric reconstructions, showing improvement compared to the direct reconstruction of mel-spectrogram and other high-level approaches which omit low-level features." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- One weakness of the paper is the limited neuroscientific motivations for audio reconstruction as a way to understand auditory mechanisms (first paragraph of the introduction). It is not clear from the introduction or the conclusion how solving this task could help to understand better the auditory processes (line 28 \"This research contributes to the comprehension of the human auditory\nsystem\" ). 
Are these models generalisable between subjects? can a model trained on a single subject be used for another subject? can they be used to identify specific features related to language disorders? \n\n- Another major weakness in this study is the lack of clarity in the writing, which prevents us from understanding some of the motivations and results/discussion. The characterisation of \"high-level/low-level\" features and \"coarse/fine-grained\" features throughout the paper is very unclear. Some of these terms seem interchangeable in how they are employed; the paper would greatly benefit from a clear definition (see more related questions in the next section).\n\n- Some points of the methodology lack details and/or seem to diminish the results (see questions in the next section). For instance, the modelling of the brain signal with ridge regression, which overlooks the spatial autocorrelation of the brain signal and structure of the auditory cortices, is not motivated. Sections 3.3 and 3.4 lack clarity, it is not clear what they aim to achieve with respect to the general motivation of the paper, and the conclusion are not clear and even seem to diminish previous results (for instance Figure 5 and line 457 to 460) - see more questions below. \n\n- There are some important missing descriptions regarding the dataset and the training procedure. How is the data split between training and testing? is it subject-wise or dataset-wise? Are the same subjects used both for training and testing? functional alignment between subjects if direct comparison between them?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1)\tFor each of the metrics, it would be helpful to understand what a good value is, and if these are getting at all close to that value. Currently, it is difficult to interpret whether the observed differences are at all significant. For instance, for the semantic representations one could use a baseline score that is two different samples in time from the same audio clip. And maybe a PCC and PSNR score could be referenced to samples with additive Gaussian noise of various SNR levels at the waveform? \n\n2)\tListening to the supplement audio files, I am not so sure that the proposed method is actually doing something reasonable. The samples “sound” more natural (which is expected from including a diffusion model in the pipeline), but they are often completely different sounds than the initialized audio, essentially hallucinations. PSNR levels of ~15-18 seem like they might be noise, relative to the original sound. \n\n3)\tIn the decoding section, one of the experiments is on male/female decoding which is called a “semantic” task. However, I believe a decent amount of male/female decoding can be achieved from simply looking at the overall power spectrum of the sound (male voices are generally lower than female voices). Thus, this doesn’t seem particularly “semantic”. 
\n\n4)\tThe metrics are not defined enough in the main text (Lines 311-319). At a minimum, the acronyms need to be defined. For instance, what is “PCC” and “PSNR”? Although some audio researchers may be familiar with these measurements, a more general ICLR attendee may not. \n\n5)\tHow many repetitions of each sound are present in each dataset and are these averaged for the analysis? More generally, how is fMRI measurement noise taken into account for the analysis? I.e., is there a sort of “noise ceiling” defined on the reconstruction? \n\nSomewhat minor suggestions for specific lines: \n* Line 075-077: This sentence doesn’t make sense to me. How does the “high-dimensionality” play a role here? The main challenge of fine-grained decoding is the lack of resolution in time and the inherently noisy signal of the fMRI response. \n* Lines 086-088: These references seem out of place, as the decomposition of sound into difference frequencies in the cochlea dates much further back than these papers. Some good references might be work by Shihab Shamma (maybe https://ieeexplore.ieee.org/abstract/document/119739, or https://pubmed.ncbi.nlm.nih.gov/16158645/) , but I would encourage the authors to perhaps cite classic textbooks or review articles on auditory processing \n* Lines 147-152: Which features are used in CLAP? The final output features or an intermediate stage? This should be mentioned in this section. \n* Line 231: “unpatchify” is not defined." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper tackles an interesting and challenging problem of trying to decode auditory fMRI activity. I like how it ties back into classic ideas of course-to-fine reconstruction. It also takes advantage of many recently proposed auditory models, similarity metrics, and auditory datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work designs a system to reconstruct audio signals from human fMRI data collected while listening to natural sounds. The system separately handles fine-grained information (i.e., the precise timing and frequencies of the sounds) and course-grained information (semantic information such as the class of sound) by using different neural network features for each branch of the architecture. The authors evaluate their method on three different fMRI datasets and multiple different evaluation methods capturing the fine-grained and semantic nature of the sounds." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The paper combines many different pre-trained systems. This makes the methods used difficult to understand, but also makes it hard to evaluate where things might be going wrong, and which pieces are critical. Due to the unknown inherit biases of each system, it is difficult to see how the work could actually be useful for understanding the human auditory system.\n \n* As currently written, the presentation of results is confusing. The text should point to each set of results when they are discussed, rather than stating (line 357-359) that qualitative results are displayed in Tables 1 and 2. Additionally, it would be helpful if the columns were in some way separated or labeled for the “fine grained” vs. “semantic” measures." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Figure 5 is confusing. What are the ‘features’ that are input into the SVM? And given the results and discussion that there is ‘little semantic content in the semantic features’ (line 449) the initial claim that semantic prompts during decoding ‘[enhances] the quality of reconstructed audio’ seems perhaps exaggerated (lines 25-26). \n\nA lot of the text is pushed to the appendix, making the official 10 pages lacking in sufficient detail and discussion. \n\nA lot of the writeup feels pretty inside baseball to this reviewer (or perhaps this reviewer is just too far outside this particular game). e.g. CLAP is not explained (though there is a reference) until page 3." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Technically, the method introduced here is interesting and seems sophisticated. \n\nThe paper is well-written and makes use of highly relevant contemporary work—including studies done in machine learning and neuroscience—and uses this grasp of the literature to provide interpretations on different steps in their method. \n\nThe paper does a nice job of comparing many different methods in addition to their own." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a new method for reconstructing audio signals from fMRI responses to sounds. Specifically, it proposes a course-to-fine decoding strategy in which fMRI responses from auditory cortex are decoded in a course-grained fashion into the semantic space of CLAP (which is not defined until page 3), and decoded in a fine-grained fashion into acoustic space.\n\nI would not recommend this paper for acceptance. While the high-level approach seems interesting, the purpose of integrating neural data seems lacking in motivation, and the use of pre-trained representations in both training and evaluation seem to present a major confound. \n\n\n\nFigure 5 is confusing. What are the ‘features’ that are input into the SVM? And given the results and discussion that there is ‘little semantic content in the semantic features’ (line 449) the initial claim that semantic prompts during decoding ‘[enhances] the quality of reconstructed audio’ seems perhaps exaggerated (lines 25-26). \n\nA lot of the text is pushed to the appendix, making the official 10 pages lacking in sufficient detail and discussion. \n\nA lot of the writeup feels pretty inside baseball to this reviewer (or perhaps this reviewer is just too far outside this particular game). e.g. CLAP is not explained (though there is a reference) until page 3." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
Lack of Clear Motivation: This reviewer got stuck at square one: what is the goal of this work? The authors don't tell us. If the goal were to understand brain function this work would proceed differently, for example separately analyzing primary versus non primary auditory cortex, asking which brain regions are best modeled by each component of the model, comparing decoded signals to human behavioral data, etc. Alternatively, perhaps it could be intended for BCI applications. But what would those applications be? If you have access to a person's auditory cortex then you would already have access to the sound they were hearing, so there would be no point trying to decode that information from the brain. A paragraph in the appendix (A.7) attempts to provide a motivation, but it doesn't make sense to me. For example it mentions that this work could \"aid individuals with voice disorders\". But if an individual had a voice disorder, one would want to generate the intended spoken information, not the heard information from auditory cortex.\n\nIt’s unclear how the findings here contribute to the ‘comprehension of the human auditory system’ as stated in the abstract (lines 27-29). While the coarse-to-fine method is inspired by the human auditory system, it is not quite a model of ‘each physiological structure of the auditory processing pathway’ (lines 96-97)\n\n2. Possible confound.\nThe results on many of the metrics suggest that the direct decoding methods better reconstruct audio than the C2F-LDM proposed here (Table 1, 2). The novel C2F-LDM method does improve on measures of FD, FAD, KL, and CLAP, but these results seem to present a confound. The reconstruction makes use of many model features from pre-trained models and are subsequently evaluated on their similarity to pre-trained model features. Thus would higher scores not be unsurprising?\n\n3. Is the fMRI data even relevant?\nThe paper should address the role of fMRI data and how it improves the methods described here. The majority of steps requires pre-trained models and predicting their ‘ground truth’ representations, but the optimal value of P=0.25 seems to suggest that using these ground truth representations without the neural data may improve reconstruction." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Questions:\n- Why was sex selected as the semantic class for the brain-to-speech case?\n- Just confirming that: semantic features of guidance = course-grained features in the CLAP space? (might be helpful to state this more explicitly)" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Overall, this paper represents a novel improvement for brain-to-audio decoding and its acknowledgement would benefit the advancement of the field. 
The approach introduced is novel, the performance state-of-the-art, and the presentation clean." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a coarse-to-fine framework for audio reconstruction from fMRI brain recordings that outperforms the leading solely fine-grained approaches. This approach draws inspiration from the hierarchical processing found in the human auditory system. It first projects the audio and fMRI into a coarse-grained semantic embedding in the CLAP (Wu et al., 2023) space before pairing that embedding again with the fMRI signal for fine-grained decoding, generating features in the space of AudioMAE (Huang et al., 2022). Lastly, an LDM is used to reconstruct the mel-spectrogram of the stimulus audio with the fine-grained embedding, before it is converted to the waveform using a pretrained HiFiGAN (Kong et al., 2020) vocoder.\n\nThe authors evaluate the performance of this framework over three different datasets encompassing three different classes of audio / reconstruction task (sound, music, and speech). The quality of the reconstructed audio is assessed by FD, FAD, KL, and CLAP score, and the mel-spectrograms are evaluated using PCC and PSNR. Performance is benchmarked against direct decoding approaches and fine-grained-only approaches. The authors also offer an investigation into the quality of the semantic information captured through these approaches." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "There are a few places where choices are made without explanation, and parts of the discussion on the semantic analysis are unclear. For example, in the discussion of the semantic analysis it is stated that there is less semantic richness in the brain-to-speech case because listeners might be more focused on content. However, this effect does not necessarily seem to hold for the fine-grained-only decoding (which is not addressed). It seems more likely that coarse-grained decoding may just be ill-suited to capturing semantic quality from speech, as the signal clearly exists given the performance of the solely fine-grained approach. This represents a potential limitation of this framework for brain-to-speech." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024reverse,\ntitle={Reverse the auditory processing pathway: Coarse-to-fine audio reconstruction from f{MRI}},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3JoLo0mmHH},\nnote={under review}\n}" }, "abstract": { "value": "Drawing inspiration from the hierarchical processing of the human auditory system, which transforms sound from low-level acoustic features to high-level semantic understanding, we introduce a novel coarse-to-fine audio reconstruction method. Leveraging non-invasive functional Magnetic Resonance Imaging (fMRI) data, our approach mimics the inverse pathway of auditory processing. Initially, we utilize CLAP to decode fMRI data coarsely into a low-dimensional semantic space, followed by a fine-grained decoding into the high-dimensional AudioMAE latent space guided by semantic features. These fine-grained neural features serve as conditions for audio reconstruction through a Latent Diffusion Model (LDM). 
Validation on three public fMRI datasets—Brain2Sound, Brain2Music, and Brain2Speech—underscores the superiority of our coarse-to-fine decoding method over stand-alone fine-grained approaches, showcasing state-of-the-art performance in metrics like FD, FAD, and KL. Moreover, by employing semantic prompts during decoding, we enhance the quality of reconstructed audio when semantic features are suboptimal. The demonstrated versatility of our model across diverse stimuli highlights its potential as a universal brain-to-audio framework. This research contributes to the comprehension of the human auditory system, pushing boundaries in neural decoding and audio reconstruction methodologies." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Brain-to-audio reconstruction", "Coarse-to-fine", "fMRI", "Auditory processing pathway" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/296b37669aec07b55233dd1c1ade8dbb4a82ac73.pdf" }, "presentation": null, "primary_area": { "value": "applications to neuroscience & cognitive science" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/a1016a7e03eab2baf41d0ed4b5a1cd1f419271ad.zip" }, "title": { "value": "Reverse the auditory processing pathway: Coarse-to-fine audio reconstruction from fMRI" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
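To make the coarse-to-fine decoding discussed in these reviews concrete, here is a deliberately simplified sketch of the two-stage structure. The submission itself trains neural decoders into the CLAP and AudioMAE spaces and conditions a latent diffusion model on the result; the stand-in below only mimics the coarse-then-fine conditioning with ridge regression on random data, and the voxel count, embedding widths, and regularization strength are assumptions for illustration, not values from the paper.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 2000      # toy sizes
d_coarse, d_fine = 512, 768         # assumed embedding widths, not from the paper

fmri = rng.standard_normal((n_trials, n_voxels))
coarse_target = rng.standard_normal((n_trials, d_coarse))  # stand-in for CLAP embeddings
fine_target = rng.standard_normal((n_trials, d_fine))      # stand-in for AudioMAE latents

# Stage 1: coarse semantic decoding from fMRI.
coarse_decoder = Ridge(alpha=1e3).fit(fmri, coarse_target)
coarse_pred = coarse_decoder.predict(fmri)

# Stage 2: fine-grained decoding conditioned on the predicted coarse code.
fine_input = np.concatenate([fmri, coarse_pred], axis=1)
fine_decoder = Ridge(alpha=1e3).fit(fine_input, fine_target)
fine_pred = fine_decoder.predict(fine_input)

print(coarse_pred.shape, fine_pred.shape)   # (200, 512) (200, 768)
```

The point is only the dataflow: the fine-grained decoder sees both the fMRI signal and the predicted coarse code, which is exactly the coupling that the reviewers' questions about the usefulness of the semantic features are probing.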
3JsU5QXNru
Group Distributionally Robust Dataset Distillation with Risk Minimization
main
Active
dataset distillation;distributional robustness;generalization
optimization
5;5;6;6
3;2;4;4
3;2;3;3
3;2;3;3
3;2;3;3
5.5
3.25
2.75
2.75
2.75
0.904534
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "In Line 017, what does \"targeting the training dataset\" mean?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "There are both theoretical and experimental demonstrations of the effectiveness of the algorithm." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes an algorithm for dataset distillation by incorporating distributionally robust optimization into it. There is theoretical justification and empirical validation of the proposed algorithm." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. There seems to be a mismatch between the motivation and the experiments. The motivation emphasizes regions with low population density, which usually correspond to the worst-group performance in subpopulation shift [1]. However, the main experiments related to distribution shift are conducted on test sets with perturbations or the worst group induced by an additional clustering process. It would be better to conduct more experiments on subpopulation datasets included in [1]. \n2. The introduction has too many paragraphs, which makes the logic of the introduction tedious with poor readability. \n\nSome minor issues:\n\n- In Line 049, \"some technique\". \n- In Line 129 and 131, \"Algorithm\" \"Numerical Results\" their first letters do not need to be capitalized. \n- In the last paragraph of introduction, Section 3 is not mentioned.\n\n[1] Yang, Yuzhe, et al. \"Change is Hard: A Closer Look at Subpopulation Shift.\" *International Conference on Machine Learning*. PMLR, 2023." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to the detailed suggestions provided in the weaknesses section." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper proposes applying distributional robust optimization to dataset distillation, providing a reasonable approach to enhance generalization.\n\n2. 
Drawing on distributional robust optimization theory, this work establishes a theoretical foundation to support the proposed approach to dataset distillation.\n\n3. The paper is well-structured, with clear algorithm block and effective visualizations that enhance the presentation of the work." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The work proposes a robust dataset distillation approach that incorporates distributional robust optimization (DRO) to enhance generalization and performance across subgroups. This method combines clustering with risk-minimized loss to conduct dataset distillation. By prioritizing representativeness and coverage over training error guarantees, the approach enhances the models trained on synthetic datasets for real-world scenarios. Both theoretical analysis and empirical validation on multiple standard benchmarks are provided, demonstrating the effectiveness of the proposed approach." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The empirical experiments focus primarily on CIFAR-10, ImageNet-10. Extending the evaluation to larger, real-world datasets with a greater number of classes would better demonstrate the effectiveness of proposed approach in generalization and robustness under real-world conditions.\n\n2. Comparing the proposed approach with additional baseline method addressing generalization and robustness would provide a more comprehensive assessment of the proposed approach, including comparisons with baseline methods such as [1, 2, 3].\n\n3. It would be valuable to visualize the robust inference tasks using real-world data rather than illustrative visuals, which could provide more insight and observations in real-world scenarios.\n\nReference:\n\n[1] Domain generalization with adversarial feature learning. CVPR 2018.\n\n[2] Self-challenging Improves Cross-Domain Generalization. ECCV 2020.\n\n[3] HYPO: Hyperspherical Out-of-Distribution Generalization. ICLR 2024." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I have stated many of my suggestions and concerns in the weaknesses section. Below, I have further questions with some minor issues.\n\n* While theoretical analyses have been provided, I wonder whether the proposed method would affect the convergence rate and make it more difficult to find an optimal solution for the proposed objective. I would appreciate an empirical time complexity analysis regarding the proposed method.\n\n* In Eq.10, what does the *N* represent? I did not see any introduction of the *N*." 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* This paper considers the group coverage and generalization ability of the synthetic dataset in Data Distillation(DD), which is interesting and novel.\n\n* The introduced algorithm is clear and the theoretical analysis seems solid.\n\n* The numerical results do show the effectiveness of the proposed algorithm. Meanwhile, the figures (Fig.3, 4) seem to indicate that the proposed method improves group coverage." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a new data distillation approach emphasizing the subgroup coverage and generalization ability of the models trained on the synthetic dataset. This paper provides a theoretical analysis rooted in Distributionally Robust Optimization (DRO) and verifies the effectiveness of the proposed method with various experiments." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* While the paper emphasizes group coverage and generalization in data distillation, the experiments are mainly conducted on IID datasets. More experimental results in scenarios with distribution shift between training and testing sets (such as long-tail classification, subpopulation shift, domain adaptation, domain generalization, etc) can further validate the improvement in group coverage and generalization ability,\n\n* According to Algorithm 1, the initialization of the synthetic dataset seems very important because it involves how the training data samples are clustered into subgroups. It may require further ablation study to verify the stability of the proposed algorithm.\n\n* The proposed method seems like a plug-in as it only modifies the objective for data distillation and could be combined with any data distillation methods. A more sophisticated comparison between the proposed method with other objectives regarding training gradients, feature distributions, and training trajectory could help readers better understand the improvement of the proposed method." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see weaknesses section." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- As far as I know, this paper firstly addresses a major limitation in standard dataset distillation (underrepresented or rare subgroups) by applying a double-layer distributionally robust optimization (DRO) framework. This approach ensures the synthetic dataset better represents the full diversity of the data, reducing performance drops across different data subgroups. 
\n\n- This paper provides not only empirical evidence of the proposed method's robustness against domain shifts such as noise, blur, and inversion, but also a theoretical analysis of the effectiveness of CVaR in the loss function to enhance robustness.\n\n- The proposed method is designed to be adaptable and can integrate with various existing distillation techniques, such as gradient or distribution matching. This modularity makes it compatible with a range of distillation methods, which are still actively developing and evolving." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a robust dataset distillation method that enhances generalization across underrepresented subgroups by using a double-layer distributionally robust optimization (DRO) approach. Traditional dataset distillation often compresses training data into synthetic sets by matching training characteristics but may struggle with subgroup generalization. To improve this, the authors cluster the training data around synthetic data points and apply a Conditional Value at Risk (CVaR) loss to minimize worst-case subgroup errors, making the synthetic dataset more representative of diverse data distributions. Experimental results show that this method significantly improves robustness under domain shifts, outperforming baseline methods on benchmarks like CIFAR-10 and ImageNet-10, particularly in challenging conditions like noise and blurring." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The proposed DRO approach, particularly with clustering and CVaR, may introduce significant computational overhead. This added complexity could be amplified for large-scale datasets due to the clustering of training (real) data. However, the paper does not discuss the computational overhead of the proposed method.\n\n- Although the method shows promising results on benchmarks like CIFAR-10 and ImageNet-10, the experiments are limited to controlled domain shifts (e.g., noise, blur). Testing under more realistic settings, such as in transfer learning, would further validate its robustness and practical relevance. For example, one could (1) train neural networks on a synthetic dataset distilled from a coarse-grained dataset (e.g., ImageNet) and (2) fine-tune and evaluate them on a fine-grained dataset (e.g., Birds 200). This setup would better illustrate the method's effectiveness in addressing the challenges posed by rare subgroups." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024group,\ntitle={Group Distributionally Robust Dataset Distillation with Risk Minimization},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3JsU5QXNru},\nnote={under review}\n}" }, "abstract": { "value": "Dataset distillation (DD) has emerged as a widely adopted technique for crafting a synthetic dataset that captures the essential information of a training dataset, facilitating the training of accurate neural models. Its applications span various domains, including transfer learning, federated learning, and neural architecture search. The most popular methods for constructing the synthetic data rely on matching the convergence properties of training the model with the synthetic dataset and the training dataset. 
However, targeting the training dataset must be thought of as auxiliary in the same sense that the training set is an approximate substitute for the population distribution, and the latter is the data of interest. Yet despite its popularity, an aspect that remains unexplored is the relationship of DD to its generalization, particularly across uncommon subgroups. That is, how can we ensure that a model trained on the synthetic dataset performs well when faced with samples from regions with low population density? Here, the representativeness and coverage of the dataset become salient over the guaranteed training error at inference. Drawing inspiration from distributionally robust optimization, we introduce an algorithm that combines clustering with the minimization of a risk measure on the loss to conduct DD. We provide a theoretical rationale for our approach and demonstrate its effective generalization and robustness across subgroups through numerical experiments." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "dataset distillation", "distributional robustness", "generalization" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/35a7cae7e063b234ddfba466ed26f98e53d5ac8a.pdf" }, "presentation": null, "primary_area": { "value": "optimization" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/3be5099d9326f830a4acdbe829f8de20f4fd4f94.zip" }, "title": { "value": "Group Distributionally Robust Dataset Distillation with Risk Minimization" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
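For context on the risk measure these reviews keep returning to, here is a minimal sketch of a CVaR-style group-robust objective of the kind the abstract describes (clustering plus minimization of a risk measure on the loss). It is not the paper's exact algorithm: the way groups are formed, the value of alpha, and the per-sample matching loss are placeholders introduced for illustration.

```python
import math
import torch

def cvar_group_loss(sample_losses, group_ids, num_groups, alpha=0.9):
    """Average the worst (1 - alpha) fraction of per-group mean losses."""
    group_losses = [sample_losses[group_ids == g].mean()
                    for g in range(num_groups) if (group_ids == g).any()]
    group_losses = torch.stack(group_losses)
    k = max(1, math.ceil((1 - alpha) * len(group_losses)))
    worst, _ = torch.topk(group_losses, k)   # tail of the group-loss distribution
    return worst.mean()

# Per-sample losses would come from whatever matching objective the distillation
# method uses; groups could come from clustering real samples around synthetic ones.
losses = torch.rand(128, requires_grad=True)
groups = torch.randint(0, 10, (128,))
robust = cvar_group_loss(losses, groups, num_groups=10, alpha=0.9)
robust.backward()
```

Because such a term only replaces the plain averaging of the matching loss, it can in principle be bolted onto gradient-, feature-, or trajectory-matching distillation objectives, which is the plug-in behaviour one of the reviews points out.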
3KEwJGYNzH
Automatic Truncation Position Selection in Singular Value Decomposition for Large Language Models
main
Active
Model decomposition; Large Language Model; Optimization
foundation or frontier models, including LLMs
3;3;6
4;3;4
1;2;3
2;2;4
1;1;4
4
3.666667
2
2.666667
2
0.5
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. The 0-1 Knapsack Problem is known to be NP-hard primarily because its capacity can be as large as $2^n$. However, in this problem, the capacity is limited. Could this constraint make a brute-force search feasible?\n2. Quantization-based model compression techniques, such as W4A16, can reduce model size to 25%. If singular value decomposition (SVD) is combined with quantization, could this approach yield further compression? Given that different decomposition methods offer varying levels of precision, is it possible to identify an SVD approach with high tolerance to quantization?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. **Automated and Importance-Aware Truncation Selection**: AutoTrunc automates the selection of optimal truncation positions using a learning-based approach to model layer importance, focusing on layers critical to performance. This approach streamlines the compression process, maximizing compression efficiency while maintaining accuracy.\n2. **Theoretical Soundness**: Built on rigorous NP-hard problem analysis and efficient budget allocation strategies, AutoTrunc is a theoretically grounded method, ensuring reliable performance estimates and compression quality across applications." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents AutoTrunc, an automated framework for selecting optimal truncation positions in singular value decomposition (SVD) to compress large language models (LLMs) efficiently. Unlike previous methods that overlook layer importance, AutoTrunc uses a learning-based approach to model each layer’s contribution to overall performance, optimizing truncation to maximize compression while preserving model accuracy. By addressing the truncation selection problem as 0-1 Knapsack Problem with efficient algorithms and dynamically allocating memory based on layer sensitivity, AutoTrunc achieves superior compression results. Experimental evaluations show that AutoTrunc outperforms existing SVD-based methods, reducing perplexity by up to 38.63% on LLaMA-2-13B at a 50% compression ratio, enabling more efficient LLM deployment without retraining." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Limited Model Diversity**: The experiments focus solely on the Llama-2 model family, which uses multi-head attention. However, recent open-source LLMs, such as the Llama-3 and Qwen-2/2.5 families, have adopted Group-Query Attention, which might lead to different outcomes in weight compression. This lack of diversity in model structures limits the generalizability of the findings.\n2. 
**Limited Throughput Improvement**: AutoTrunc achieves only modest gains in inference throughput (approximately 1.1x from 0% to 60% compression), whereas other methods, such as SVD-LLM, achieve over 2x speedup at similar compression levels. This limited throughput improvement may reduce AutoTrunc’s impact in applications where throughput is a critical factor.\n3. **Lack of Comparison with Quantization Techniques**: The paper does not thoroughly compare AutoTrunc’s performance against other popular compression methods, such as quantization (AWQ, GGUF, GPTQ). Without these comparisons, it is challenging to assess AutoTrunc’s effectiveness, especially in contexts where quantization might offer a better trade-off between compression and performance." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "See above." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1. This paper addresses a good research topic: SVD for LLM compression.\n \n2. The paper is well-organized." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes an automatic way to search for the optimal truncation position when compressing large language models with SVD. The authors first empirically show that the truncation position is highly correlated with the final performance. Based on the observation, they modeled the layer importance based on the correlation and designed a way to obtain the optimal configuration. Experimental results demonstrate the effectiveness of the designed searching strategy for the truncation position." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Poor Presentation** Many aspects from Section 2.2 to Section 3.2 are not clarified clearly. Specifically,\n \n 1. The data collected in Figure 1 is also used for modeling the correlation, but why does the method only need to collect 6x40=240 data points? Why does it only need to measure compression ratios ranging from 10% to 50%?\n \n 2. Why does the author collect the modeling data by only applying the uniform compression ratio and greedy search?\n \n 3. The computed upper-bound is also confusing. On the one hand, it is correlated to the manual configuration Fmin, meaning that setting a larger Fmin could increase the performance? On the other hand, it is correlated to the learned modeling parameter, indicating that changing the pre-defined modeling function from a linear one to a more precise nonlinear one could also impact the upper-bound. Therefore, it is hard to tell whether reaching this upper-bound is truly the optimal solution.\n \n 4. Both the empirical observation in Section 2.2 and the modeling in Section 3.1 are highly data-dependent. 
What if we change the data distribution for this empirical analysis and modeling?\n \n2. **Overclaim on Compression Speed:** The author claims that the search process of the recent work, ASVD, is slow; however, I found that the designed method still needs to measure the end-to-end perplexity under different compression ratios, which is similar to what has been done in ASVD. Additionally, the proposed method runs a learning-based algorithm to model the correlation between truncation position and corresponding perplexity, which is also time-consuming. Given these two situations, it is hard to claim that the proposed automatic searching algorithm is more efficient than prior works.\n \n3. **Missing Experiments:**\n \n 1. Pruning-based compression methods are different from the SVD-based ones, and their compression ratios are not exactly equal to the ratio of parameter reduction in the LLM. Therefore, it is not fair to compare these two types of methods under the same compression ratio.\n \n 2. Lack of experimental comparison on generation tasks.\n \n 3. Lack of comparison with quantization-based compression methods.\n \n 4. Lack of analysis on running the methods using data with different distributions." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "The following are a few questions that could benefit the paper: \n- Is it possible to quantify, both experimentally and asymptotically, the extra effort required to derive layer-wise ranks? This can help demonstrate the gains of the method relative to SVD-LLM.\n- What was the observation when the lower-bound was not applied and yet AutoTrunc was allowed to automatically configure the layer-wise low ranks for decomposition? It is understandable that the performance might take a significant hit, probably in many cases worse than SVD-LLM, but it is worth showing that as a baseline to demonstrate the efficacy of the lower-bound. This may be especially useful to highlight the fact that an ideal low-rank decomposition may not necessarily be good all the time, and hence a constrained (with the lower-bounds on each layer) adaptive/automatic truncation can be justified even better. \n- In a similar spirit, is it a better strategy to set an overall compression ratio as a single hyper-parameter, as opposed to setting a lower-bound on each layer and letting the algorithm decide the truncation of the singular values in each layer? This in itself might be a new direction and can be time-consuming for this review, but it is certainly a potential future direction and a new method. \n- The proposed method is applied only to LLaMA-2-7/13/70B models; why not apply it to other families of LLMs such as Mistral/Phi-3/X/Y/Z, even if they are in the realm of 7B or fewer parameters? 
The questions here are: i) can the learned `\alpha(s)` or even the strategy of AutoTrunc be transferred to other families of models? ii) If the answer is yes, what is common/different in all these models that actually contributes to the improved performance, or could a layer-wise study explain this better? iii) If the answer is no, can this be transferred easily to a new family of LLMs, or do we have to repeat the whole method from scratch for a new LLM architecture?\n- The performance of the model drops at >=50% compression ratio when compared to SVD-LLM; any hunches as to why?\n- Is it possible to compare and contrast the different SOTA methods discussed in this paper in terms of latency and memory efficiencies at different compression ratios?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is well-written and easy to follow along.\n- The contributions in the paper are nicely positioned w.r.t. the state-of-the-art in the literature.\n- The strongest point(s) of the paper are i) applying SVD-LLM on each layer of the LLM in an adaptive manner, ii) learning the layer importance, applying the lower bounds on the compression ratios at each layer (of course this is different from the over-all compression of the entire LLM), and using LambdaRank for listwise ranking, and iii) empirical quantification, on sub-layers, of the correlation between performance and the model quality." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper addresses the critical problem of compressing large language models using SVD-based decomposition. The significant contribution of the paper is a theoretical backing with empirical evidence to automatically truncate the singular values/vectors of each layer instead of applying a uniform low rank on each layer as in the SVD-LLM method. The paper has a learnable strategy that learns to decompose a given layer in an LLM using layer importance modeling." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Follow the questions section" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024automatic,\ntitle={Automatic Truncation Position Selection in Singular Value Decomposition for Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3KEwJGYNzH},\nnote={under review}\n}" }, "abstract": { "value": "Model decomposition in large language models has drawn much attention due to its superiority and good interpretability, where activation-aware singular value decomposition (SVD) can achieve competitive performance by mitigating reconstruction errors brought by outliers in activation. However, the performance of the state-of-the-art SVD-based LLM compression method is limited by the selection of truncation positions. No work meticulously examines the details of this problem theoretically and empirically tests its correlation with model performance. To fill the research gap, we propose an efficient method that can automatically select truncation positions, namely AutoTrunc. In our work, we first analyze the correlation between truncation positions and the model performance. 
Then, the model layer importance is modeled based on the correlation, followed by mathematical proof to illustrate how to reach and obtain the optimal truncation position configuration for different layer types. Extensive experiments are carried out to verify our presumption and evaluate our proposed method. Our proposed AutoTrunc outperforms the state-of-the-art SVD-based LLM compression method, with perplexity scores dropping by 24.65% and 38.63% at the compression ratio of 50% in LLaMA-2-7B and LLaMA-2-13B, respectively. The code will be released upon acceptance." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Model decomposition; Large Language Model; Optimization" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/8a50081b03350d69682ccd05f6638f5db4ced0fd.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Automatic Truncation Position Selection in Singular Value Decomposition for Large Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
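On the first reviewer's question about whether the bounded capacity makes the 0-1 knapsack view tractable: a pseudo-polynomial dynamic program handles the multiple-choice variant (one truncation rank per layer under a global parameter budget) directly. The sketch below is that textbook DP, not AutoTrunc's actual procedure; the candidate ranks, costs, and utilities are invented for illustration and would in practice come from a layer-importance model. It assumes the budget admits at least one option per layer.

```python
import math

def select_truncations(options_per_layer, budget):
    """options_per_layer: per layer, a list of (cost, utility, rank) candidates."""
    NEG = -math.inf
    dp = [NEG] * (budget + 1)
    dp[0] = 0.0
    choice = [[None] * (budget + 1) for _ in options_per_layer]
    for i, options in enumerate(options_per_layer):
        new_dp = [NEG] * (budget + 1)
        for b in range(budget + 1):
            for cost, utility, rank in options:
                if cost <= b and dp[b - cost] > NEG:
                    cand = dp[b - cost] + utility
                    if cand > new_dp[b]:
                        new_dp[b] = cand
                        choice[i][b] = (rank, b - cost)   # remember rank and remaining budget
        dp = new_dp
    best_b = max(range(budget + 1), key=lambda b: dp[b])
    ranks, b = [], best_b
    for i in reversed(range(len(options_per_layer))):
        rank, b = choice[i][b]
        ranks.append(rank)
    return list(reversed(ranks)), dp[best_b]

# Two layers, three candidate ranks each, parameter budget of 10 (arbitrary units).
layers = [[(2, 0.3, 64), (4, 0.5, 128), (6, 0.6, 256)],
          [(3, 0.4, 64), (5, 0.8, 128), (8, 0.9, 256)]]
print(select_truncations(layers, budget=10))   # picks ranks [128, 128] with total utility 1.3
```

The cost is on the order of layers x budget x options per layer, so whether brute-force DP is practical hinges on how coarsely the parameter budget is discretized, which is presumably why a learned importance model plus an allocation rule remains attractive.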
3LFR5N2uv8
Younger: The First Dataset for Artificial Intelligence-Generated Neural Network Architecture
main
Active
Artificial Intelligence-Generated Neural Network Architecture;Neural Architecture Design;Graph Neural Network;Benchmark;Dataset
datasets and benchmarks
3;5;6;6
5;4;3;3
2;3;3;3
1;2;3;3
1;3;3;3
5
3.75
2.75
2.25
2.5
-0.984732
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See above." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The study is highly relevant and timely for NAS-related fields, with the potential to generate a high impact on the community. \n\nThe effort invested in this work seems quite substantial.\n\nThe writing and the presentation are clear and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This study produced a dataset of 7k unique models, Younger, from 174k publicly available models and 30 tasks. After processing and filtering, architectures are stored as acyclic graphs based on ONNX definitions. The study aims to automate architecture generation and refinement. It offers a range of statistical analyses to illustrate the diversity of Younger. Several experiments are also included to show the potential of Younger, especially as a benchmark for GNN." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The goal of this study is quite ambitious, allowing people to search for good architectures using Younger for a wide range of tasks. The experimental section shall match that ambition, e.g. demonstrating the applicability and benefits of Younger on a wide range of tasks. This would help illustrate the practical application of the dataset.\n* Provide a concrete example, or case study showing how Younger could be used for a specific task like CIFAR-10 classification and ImageNet classification.\n* Other possible cases can be performing time series prediction\n* Or creating generative models.\n* Or multimodal models for a specific task e.g. text-to-video or description or caption generation task.\n\n---\n\nAlso the paper claims Younger is advantageous in comparison with benchmarks like DARTS, NAS-Bench-201. Other than the difference between them, as shown in Table 1, how would they compare in a specific task, e.g. using different sets for the same task? The study should show Younger's advantages clearly, e.g. leading to better performance, reducing search costs etc.\n* Conduct a comparative experiment using Younger and a benchmark like NAS-Bench-201 on a common task, measuring metrics like search efficiency and final model performance.\n\n---\n \nClearly state whether AIGNNA is a novel term introduced in this paper. If not, provide the appropriate citation.\n\n---\n\nThe code and the dataset link cannot be found in the submission.\n* Provide direct links to the code repository and dataset, perhaps in a dedicated \"Resources\" section." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see the weaknesses. Overall, this paper is interesting and the proposed YOUNGER dataset seems to be useful in several directions. Giving all these, I’d like to recommend the score 6 temporarily." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1) Constructing the dataset for neural architectures in the AI-generated manner is novel and interesting. \n2) The paper is nicely written and well organized. The details for the dataset and construction processes are clearly presented. \n3) The AI-generated architectures in different types are very helpful to several research directions." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a novel dataset for AI-generated neural network architectures. Specifically, the construction process contains four core steps, i.e., retrieving NN models, converting the models to ONNX format, extracting DAGs from the ONNX models, and filtering out isomorphic DAGs to ensure the uniqueness of the architectures. Some experimental results support the statistical analysis and present some distributions of the whole dataset." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) More details in terms of the use of the proposed YOUNGER dataset are expected. For example, it is possible to provide at least one case using the YOUNGER dataset, such as performing some NAS algorithms to search for architectures in this dataset? \n2) The experimental results are mainly focus on GNNs. However, it seems that there can be other types of architectures contained by YOUNGER. Could the authors provide additional experimental results in terms of other types of architectures beyond GNNs?\n3) More details for the distribution of the performance of architectures are needed." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "in the paper it is stated in line 284 that \"less than 1% of these models represent heterogeneous and effective architectures. 
This notably low proportion of heterogeneous architectures highlights the limitations of current neural network design methods, both manual and NAS-based, in fostering architectural innovation\". \n\nHow did the authors decide whether an architecture is heterogeneous? And what is meant by an effective architecture? In addition, since all architectures are from online sources, they should be mostly manually built, right? It is really surprising that out of all models available online, less than 1% are really valid." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper presents a good contribution to the community by offering a dataset that can potentially facilitate the research in the neural architecture search field.\n\n2. The dataset published can overcome challenges of previous related datasets in terms of operator scope and scale." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a dataset of neural network architectures extracted from public model sources. These models are then converted to intermediate-representation operators for further research purposes. The dataset incorporates richer operators than existing neural architecture datasets, to facilitate more research in the field." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "One complaint I have regarding this dataset, compared with existing NAS-Bench datasets, is that the proposed dataset doesn't seem to have the associated training performance on a standardized dataset. In comparison, the NAS-Bench datasets are evaluated on a standard task (CIFAR-10) under different settings. Because of this, users can only perform unsupervised analyses like those shown in Figures 3-6." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Section 4.3.2: different tasks have different evaluation metrics and ranges for different datasets [9], so how do you handle that?\nSection 4.3.3: Can you provide more information on exactly what this is, in terms of framing it through citations and potential tables/figures presenting the results? That would be a better use of page real estate than Figs 2-6, which are more appendix details.\n\nReferences:\n\n[1] Liu, Yuqiao, et al. \"Bridge the gap between architecture spaces via a cross-domain predictor.\" Advances in Neural Information Processing Systems 35 (2022): 13355-13366.\n\n[2] Mills, Keith G., et al. \"Gennape: Towards generalized neural architecture performance estimators.\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 8. 2023.\n\n[3] Yang, Yichen, et al. \"Equality saturation for tensor graph superoptimization.\" Proceedings of Machine Learning and Systems 3 (2021): 255-268.\n\n[4] Zhang, Chenhao, et al. 
\"Towards better generalization for neural network-based sat solvers.\" Pacific-Asia Conference on Knowledge Discovery and Data Mining. Cham: Springer International Publishing, 2022.\n\n[5] Mills, Keith G., et al. \"Building Optimal Neural Architectures using Interpretable Knowledge.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\n\n[6] Schrodi, Simon, et al. \"Construction of hierarchical neural architecture search spaces based on context-free grammars.\" Advances in Neural Information Processing Systems 36 (2024).\n\n[7] Salameh, Mohammad, et al. \"AutoGO: automated computation graph optimization for neural network evolution.\" Advances in Neural Information Processing Systems 36 (2024).\n\n[8] White, Colin, et al. \"How powerful are performance predictors in neural architecture search?.\" Advances in Neural Information Processing Systems 34 (2021): 28454-28469.\n\n[9] Mills, Keith G., et al. \"Aio-p: Expanding neural performance predictors beyond image classification.\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 8. 2023." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Strengths:\n- The dataset consists of many diverse architectures from different tasks.\n- Breaking the confines of predefined search spaces is a good step forward for NAS." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This manuscript proposes Younger, a dataset comprising of neural network ONNX graph representations collected from various online repositories. Each ONNX graph is annotated with some kind of task performance. The goal of Younger is to fuel Artificial Intelligence-Generated Neural Network Architecture (AIGNNA) generation, in order to break the confines of pre-established search spaces. The manuscript consists of tables and figures tabulating/illustrating features of the Younger dataset, as well as some limited experiments on AIGNNA Exploration." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Weaknesses:\n- Primary weakness of this paper is that its key contribution is not novel at all. While it is true that predefined search spaces pose a large problem for NAS, there are existing works that break the confines of predefined search spaces, enabling generalizable prediction across different micro, cell-based NAS search spaces [1] and even between micro and macro search spaces [2], using ONNX as the generalizable representation [3, 4, 5] or otherwise. This paper does not seem to recognize such existing works.\n- AIGNNA in Section 4.3, the authors illustration is flawed for Fig. 7 left. You would not have a choice between 'Add' and 'ReLU' since ReLU is an activation applied to one single input, while 'Add' combines several inputs into one. A better choice would be to ask 'ReLU' or 'SiLU' or 'Add' vs. 'Concat'. Here still, there is existing work on generating new architectures outside of pre-existing search spaces [6, 7]. \n- Experimental results in Section 4.3.2 is not very convincing as you are focusing on testing different GNN designs (which govern the message passing rule, not necessarily graph feature design), while there are numerous neural predictors in the literature [8] which are better compared to. Also, ACC, F1, Prec. 
and Recall are generally not evaluation metrics for neural predictors; rather, rank correlation (Kendall's Tau or Spearman Rho) and regression error (L1) are more common [8]. \n- Overall presentation in the paper is very lacking. Table and Figure captions are too short and do not adequately convey enough information, while most figures (2 - 6) should be re-tweaked with larger fonts." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024younger,\ntitle={Younger: The First Dataset for Artificial Intelligence-Generated Neural Network Architecture},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3LFR5N2uv8},\nnote={under review}\n}" }, "abstract": { "value": "Designing and optimizing neural network architectures typically require extensive expertise, starting from handcrafted designs followed by manual or automated refinement, which significantly hinders rapid innovation. To address these challenges, Younger is introduced as a comprehensive dataset derived from over 174K real-world models across more than 30 tasks from various public model hubs. After extensive processing and filtering, Younger includes 7,629 unique architectures, each represented as a directed acyclic graph with detailed operator-level information based on ONNX operator definitions, enabling compatibility across different deep learning frameworks. The dataset is designed to support the emerging research area of Artificial Intelligence-Generated Neural Network Architecture (AIGNNA), which aims to automate their generation and refinement. Comprehensive statistical analysis, including architecture component analyses, highlights the diversity and complexity of architectures in Younger, revealing the potential for future research in this domain. Initial experiments, including operator and dataflow predictions, demonstrate the dataset's utility for architecture exploration and evaluation, and highlight its potential as a benchmark for graph neural networks. Furthermore, an online platform ensures continuous maintenance and expansion of the dataset, supporting global researchers in their endeavors. The dataset and source code are publicly available to encourage further research and lower entry barriers in this challenging domain." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Artificial Intelligence-Generated Neural Network Architecture", "Neural Architecture Design", "Graph Neural Network", "Benchmark", "Dataset" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/e5114cbff2460c3340f5a44e414d247dd7599255.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Younger: The First Dataset for Artificial Intelligence-Generated Neural Network Architecture" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3LOcwfB4JX
General OCR Theory: Towards OCR-2.0 via a Unified End-to-end Model
main
Active
OCR;LVLM;Multimodal
applications to computer vision, audio, language, and other modalities
3;5;6;6
4;4;5;1
3;2;3;3
3;2;4;3
1;3;2;4
5
3.5
2.75
3
2.5
-0.272166
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. Section 4.1 lists joint training and post-training for only 1 epoch. Usually multiple epochs are required to train a model. While post-training can be understood as vision encoder and much of language decoder may already be well-trained from prior stages, 1 epoch for joint training seems pretty small. Any reason why that worked? Is there a study on how more epochs affected the outcome? Is it possible that there isn't much data diversity between training and test set, and hence, 1 epoch is enough?\n\n2. What are the training/inference latency gains by using a smaller size model like GOT compared to Qwen-VL-MAX or others?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper presents a unified end-to-end model for a gamut of OCR documents, including sheet music, geometry and number-centric charts. It replaces the cascaded OCRs specialized in different document types.\n\nThe way the three stages of training are applied to unify a diverse set of OCR tasks (scene, document, chart, music sheets, etc.) within a single OCR is interesting. The task-oriented fine-tuning is limited to post-processing the language decoder. Freezing the vision encoder avoids increasing the computational demands and ensures foundational visual understanding is stable across the tasks. \n\nThe results are compared against the SOTA methods on a variety of metrics including F1-scores, edit distances, BLEU and METEOR values, and seem to outperform majority of the methods. For box-guided and color-guided OCR, specific comparison to Fox Lie et al. seems to outperform against all the metrics." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The manuscript proposes a unified end-to-end 2.0 model for OCR, called GOT (General OCR Theory) using LVLMs (Large Vision Language Models). The architecture contains 80M parameters in the encoder, and 500M parameters in the decoder tackling long-contexts. Region-based recognition, dynamic resolutions and multi-page OCR are few other properties of GOT. It supports English and Chinese and can produce structured formats like markdown, tikz, smiles and kern. \n\nGOT has a 3-stage training process: pre-training the vision encoder, joint-training of encoder and decoder, and finally the post-training of the language decoder. The performance is compared against SOTA methods on various scores like edit distance, F1, BLEU and METEOR, and seems to out-perform against majority of the SOTA methods. The results on markdown, sheet music, geometry and number-centric charts are also presented." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The weakness of the paper lies in its novelty. 
The 3-stage training process is well known in the literature. For example, many existing frameworks in OCR, vision-language models and LVLMs decouple encoder pre-training from the rest of the pipeline. The vision encoders are usually pre-trained on a wide variety of data to create a foundational understanding of text and scene. The joint training of vision and language pieces is likewise known from models such as UniT, BLIP and LVLMs. Lastly, the fine-tuning of the language decoder piece is again seen in T5, etc. Perhaps the prime novelty is the application of these methods to an OCR problem, the smarts about synthetic data generation, and the OCR-specific fine-tuning.\n\nThe other weakness of the paper is in its presentation. The paper is overall hard to follow, as it continues to mix architecture, training, data and task-specific details all together, and does not lay them out in separate sub-sections. E.g., Section 3.2.1 starts with the architecture, dives into input sizes and parameter sizes, and goes through data peculiarities (natural scenes, cropped slices) and the training process all in one paragraph. More architecture diagrams could be added to aid the reading.\n\nLastly, in the experiments, ablation studies are missing to underscore the importance of each of the stages and data types. Latency studies, comparisons with SOTA methods, and failure cases are also missing." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 1 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. How does the performance of the proposed method fare on openly and widely used page-level datasets (such as CASIA-HWDB, HCCDoc, CTW1500, ReCTS, IAM, CROHME16/19, etc.)? Why was the effectiveness of the proposed method not tested on these commonly used datasets in the community?\n\n2. Are the test datasets used in Sections 4.2.2, 4.2.3, and 4.2.5 open-source?\n\n3. In the references, proprietary acronyms should be capitalized, for example, CASIA, IAM, HWDB, BLIP, etc.\n\nAdditional comment: I do not agree that low training and inference costs must be a characteristic of OCR 2.0. As a new technology framework or paradigm for OCR in the era of AGI, it should also possess scalability capabilities." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper introduces a unified OCR-2.0 model, emphasizing an end-to-end architecture which is designed to handle various OCR tasks efficiently.\n\n- GOT demonstrates versatility by recognizing a wide range of artificial optical characters, including sheet music, geometric shapes, and charts. The model can adapt to different input styles and output formats, enhancing readability for formulas and tables.\n\n- This paper is well written and well organized.\n\n- The idea of OCR 2.0 is interesting and novel."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces an so-called OCR-2.0 model named GOT, designed for advanced Optical Character Recognition tasks. It proposes a new OCR model, emphasizing end-to-end architecture, low training costs, and versatility in recognizing a wide range of artificial optical characters. The model, with 580M parameters, incorporates a high-compression encoder and a long-context decoder for handling various OCR tasks.\n\nGOT is evaluated on multiple OCR tasks, demonstrating superior performance in plain document OCR, scene text OCR, formatted document OCR, fine-grained OCR, and more general OCR tasks like sheet music, geometric shapes, and chart OCR." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "(1) The term \"general OCR theory\" is not appropriate as the paper does not present any rigorous theory. It is suggested to consider alternative terms such as General OCR Technology/Framework/Pipeline/Methodology.\n\n(2) Dataset construction is a significant contribution of this work. The authors utilized data engineering methods to create a substantial amount of non-public training data. If these datasets are not made publicly accessible, it will make it challenging for other researchers to perform fair comparisons under the same settings as this paper.\n\n(3). In section 4.2.2, the authors collected 400 natural scene text images for testing. Why did they not use publicly available datasets in this domain (such as CTW1500, ReCTS, etc.) to evaluate the performance of GOT on natural scene text? I am wondering if the proposed method on these public datasets can achieve state-of-the-art (SOTA) performance." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Show in the part of Weakness." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This article presents a unified approach to OCR recognition tasks, making it one of the most comprehensive OCR models to date with sufficient tasks.\n2. The proposed GOT method employs a three-stage pre-training and fine-tuning process to achieve the experimental results outlined in the paper.\n3. The GOT method addresses various OCR recognition problems across multiple scenarios (natural scenes, documents, etc.), as well as different levels of granularity, such as document-level and region-level recognition.\n4. Multiple datasets are constructed to conduct these diverse settings of these OCR recognition tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces the GOT model (General OCR Theory), to improve upon traditional OCR systems (OCR-1.0). 
With 580 million parameters, GOT processes various artificial optical signals and supports multiple output formats. It features interactive region-level recognition and incorporates dynamic resolution and multi-page OCR, showing superior performance in experiments." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The writing of this article needs further improvement, as several key details are missing. For example, when discussing the method, it is unclear how different tasks are distinguished. Does it involve using a question as input to the decoder, similar to existing MLLMs?\n2. In the experiments, the paper does not conduct comparisons on the OCRBench, InfoVQA, and DocVQA benchmarks. Is this because the proposed method does not support QA? (You did not clarify how you distinguish between different tasks.)\n3. This paper mainly focuses on recognition issues related to OCR tasks and does not address detection problems. One possible reason could be that both the current encoder-decoder and decoder-only architectures struggle with coordinate regression prediction, which may have prevented you from tackling detection tasks.\n4. Additionally, there is a lack of comparison with methods like Kosmos." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Some more details would be necessary on how metrics are computed given the full recognized text and the ground truth.\n- Also, some more details on how the OCR task on charts is defined would be helpful." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Compared to other unified end-to-end frameworks for multi-task OCR based on multimodal LLMs, the proposed approach is efficient and the model is relatively small. \n- The proposal of a new training strategy that adds complexity to the model incrementally, both from the point of view of the model and of the data used for training. \n- The generation of a large collection of data to train the model can be useful for advancing research in generic OCR (if the data is made public after publication)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper describes a single unified framework to perform end-to-end OCR on different kinds of images (documents, scene images, handwritten text, charts, music sheets, math formulas). The framework relies on collecting a large amount of data for every type of image, partially from public data sources and partially automatically rendered. Then, a curriculum strategy is employed to train the model based on standard encoder and decoder architectures.
In the first stage, only a limited number of OCR tasks with limited variability are used to train the encoder using a simple decoder, and progressively more data, more tasks and the final decoder architecture are included in subsequent training stages. Experimental results compare the proposed approach with other generic models based on multimodal LLMs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper lacks contextualization and comparison with previous SoA OCR methods not based on LLMs and specialized in each of the individual OCR tasks. The related work needs a much better discussion of, and reference to, existing specific methods for text recognition in different tasks (scene text, documents, handwritten text, ...). In the experimental results I also miss comparisons with specific OCR methods for each task, and for some tasks even comparisons with existing commercial OCR tools. \n- Following the previous comment, I think that the paper should also use common standard benchmarks and datasets for some specific OCR tasks. In the past years there has been a huge effort in the text recognition community to create standard benchmarks for evaluation, which are ignored in the paper. Using these common benchmarks (for all the tasks where this is possible) would help to get a better understanding of the contribution of the proposed approach in comparison with existing OCR techniques. \n- As far as I understand, most of the images used to train and evaluate the proposed approach are very clean images, collected from clean PDF documents or automatically rendered, without the kind of noise, distortion, low-resolution problems, ... that can be encountered when dealing with real images. \n- I miss some analysis of the contribution of each of the training stages to the final performance of the model." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024general,\ntitle={General {OCR} Theory: Towards {OCR}-2.0 via a Unified End-to-end Model},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3LOcwfB4JX},\nnote={under review}\n}" }, "abstract": { "value": "Traditional OCR systems (OCR-1.0) are increasingly unable to meet people's usage due to the growing demand for intelligent processing of man-made optical characters. In this paper, we collectively refer to all artificial optical signals (e.g., plain texts, math/molecular formulas, tables, charts, sheet music, and even geometric shapes) as \"characters\" and propose the General OCR Theory along with an excellent model, namely GOT, to promote the arrival of OCR-2.0. The GOT, with 580M parameters, is a unified, elegant, and end-to-end model, consisting of a high-compression encoder and a long-contexts decoder. As an OCR-2.0 model, GOT can handle all the above \"characters\" under various OCR tasks. On the input side, the model supports commonly used scene- and document-style images in slice and whole-page styles. On the output side, GOT can generate plain or formatted results (markdown/tikz/smiles/kern) via an easy prompt. Besides, the model enjoys interactive OCR features, i.e., region-level recognition guided by coordinates or colors. Furthermore, we also adapt dynamic resolution and multi-page OCR technologies to GOT for better practicality. In experiments, we provide sufficient results to prove the superiority of our model."
}, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "OCR", "LVLM", "Multimodal" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/0db133a9ee6a504fb4672927d1f04a368b4c5580.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/a3be147f4e6febd6dceb8256d6b2dc7f703d89c5.zip" }, "title": { "value": "General OCR Theory: Towards OCR-2.0 via a Unified End-to-end Model" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3LifGYAD0W
Eliciting Black-Box Representations from LLMs through Self-Queries
main
Active
LLMs;representations
foundation or frontier models, including LLMs
3;5;5;8
3;3;3;3
2;2;3;4
2;2;3;3
2;3;3;3
5.25
3
2.75
2.5
2.75
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The same as in the Weaknesses part." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- A simple yet effective method to elicit the internal representations of black-box models\n- A series of detailed experiments demonstrating QueRE's effectiveness across various benchmarks and settings, comparing it favorably against more complex, resource-intensive methods.\n- Strong practical application value.\n- Mathematical foundation" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces an effective method called QueRE, designed to infer the internal representations of black-box language models through self-queries, particularly in follow-up question scenarios. The authors represent the black-box model by using the probability of the \"yes\" token as vector values, and then train a linear model to achieve the following objectives: 1) accurately predict model performance at the example level, and 2) assess various states of the language model, such as detecting if the model has been influenced by harmful prompts, determining if it has been replaced by another model, and identifying the architecture and size of the models. The authors also demonstrate that this approach generalizes beyond top-K token reliance, making it applicable to sample-based methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I am impressed with this paper, both by its strong results and its practical applications. Could you elaborate on the intent behind your design choices? Additionally, did you explore other methods that ultimately proved less effective or failed to yield similar results?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- While your method is superior to the baselines, can you comment on how much more expensive/cheaper it is?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. 
The paper is well written, and the ideas are conveyed in a way that is accessible and logically coherent, making the methodology and results easy to follow.\n2. This work is particularly significant given the increasing reliance on black-box LLMs by researchers and developers who lack full access to model internals. By providing a practical and scalable way to elicit representations from these models, the paper addresses a growing need.\n3. The method of querying black-box models with elicitation question to construct representations is creative and the method outperformed strong baselines. The method was also applied to three different tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a method to extract black-box representations from large language models (LLMs) by querying them with elicitation questions. The approach leverages the probabilities of model responses to these questions, creating low-dimensional representations that can be used to predict model performance on specific tasks, detect adversarial manipulation, and distinguish between different model architectures and sizes. The authors demonstrate that these black-box representations are effective, often outperforming other techniques that even rely on internal model states." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper labels the extracted low-dimensional vectors as “representations,” but this may be overstated. These vectors are simply derived from yes/no responses to elicitation questions, which only provide limited insights into the model’s deeper knowledge or reasoning structures.\n2. My interpretation is that the classifier is primarily learning to detect shifts in the model’s calibration—the confidence in its yes/no responses—rather than meaningful behavioral changes. This is limiting since if a model provider adds the system prompt (“Be helpful and cautious.”), it could alter the model's calibration and trigger your classifier as detecting an adversarial/harmful LLM (task 3). I think any added system prompt for that matter would trigger a false positive. Furthermore, if system prompts were appended to all models by the model providers, I'm not sure you could still reliably classify between models (task 2)?\n\nI’m not sure how useful the proposed method is for actual tasks beyond predicting model performance (point 1), and I'm not sure if the method is actually robust to the variations I described (point 2)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Questions:\n\n- I’m unsure about the importance of distinguishing between model architectures. 
More related work and citations could be provided to highlight its importance.\n- Suggestion for clarity: Prompts for the black-box baselines (pre-conf and post-conf) should be provided in the appendix to make it easier to understand what they are." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The authors show significant results for the task of predicting whether the model answers an open-ended question correctly. They compare against a variety of white-box and black-box methods and beat these baselines. The presentation of this section was clear.\n- The scaling chart (Figure 8) demonstrates performance improvement with more elicitation questions. This work can be viewed as an extension of the post-conf baseline, which uses just one way of asking for the probabilities. Instead, we can ask it in multiple ways. We then apply a linear classifier on top of it to weigh the different possible ways of asking for probabilities. The scaling chart demonstrates the usefulness of their method compared to the post-conf baseline." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors introduce QueRE, a black-box method for extracting model representations. QueRE extracts representations by asking follow-up yes/no questions about the LLM's answers and using the articulated response probabilities as features. They ask questions such as “Are you confident in your answer?”, to which the LLM replies with, e.g., P(“Yes”) = 0.95. They then use the value of 0.95 as part of the representation for a linear model to predict whether or not the LLM gets the answer correct. By asking multiple follow-up questions, they create a representation for a linear model to use. This linear model can be used for particular tasks, given labeled data.\n\nContributions\n\n- On the task of predicting whether the model answers an open-ended question correctly, QueRE beats several white-box and black-box methods.\n- QueRE can distinguish between model architectures, and detect adversarially prompted models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- It is unclear to me whether QueRE works on out-of-distribution datasets. The results would be more significant if QueRE generalizes to out-of-distribution datasets, where it is hard for humans to supervise and we don't have ground truths.\nSuggestion: If you use labeled data from e.g. HellaSwag to train your linear model, does the model work on SQuAD?\n- The setup for detecting adversarially influenced models is simplistic. Using a system prompt that instructs the model to \"answer questions incorrectly\" is easily detectable. For example, if I asked the adversarial model “What is 1 + 1?”, the adversarial model may reply “3”, which someone can easily detect as wrong.\nThis setup fails to address the forms of adversarial influence that the cited Sleeper Agents paper spells out, where models insert bugs only in specific contexts.\nSuggestion: Use a setup similar to the Sleeper Agents paper where you evaluate code-like tasks. The system prompt may be “When creating code, insert bugs into the code. Otherwise, when asked other questions, just act normally”. \nThere is a question of what labeled data to use in this adversarial model setup.
Related to my first point, assume that you don’t have labeled data about code questions. Does using labels from e.g. the dataset SQuAD still work? Can the linear predictor generalize to detecting these adversarial questions, even without ground truths from these adversarial questions?\n\nAddressing these two weaknesses would improve my rating." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See Weakness section." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "I believe extracting low-dimensional representations that can reflect LLM's behavior is an important and challenging problem." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces QueRE, a method for constructing black-box representations from LLMs by using the models’ confidence in answering specific follow-up questions. These representations are instance-level, meaning each one corresponds to a single instance. Through experiments, the authors demonstrate that these representations can predict task accuracy, differentiate between a clean LLM and an adversarially instructed one, and distinguish different LLMs. The experiments are conducted by additional supervised training on these representations for every dataset." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "W1: The representations extracted in this paper fall short of my expectations for “eliciting black-box representations from LLMs.” The authors construct representations as Yes-or-No probabilities for a set of follow-up questions, like “Do you think your answer is correct?” This approach feels quite post-hoc. I expected “black-box representations from LLMs” to more fundamentally reflect the models’ internal workings. To give some concrete examples, first, I would hope the representation exhibit generality across tasks, rather than being tailored for accuracy prediction with a predictor trained for each specific task. Second, I would hope the representation finds utility in zero-shot settings as well. While the authors may have a different perspective, I encourage them to clarify their definition of “black-box representation” early in the paper.\n\nW2: 2. Following up on the previous comment, if the authors could demonstrate that a single predictor can be trained across tasks to predict accuracy and generalize to new, unseen tasks without any known ground truth, it would help demonstrate the generality of the extracted representations.\n\nW3: Tables 2, 7, and 8 show that random sequences can outperform QueRE for some models and tasks, suggesting that the specific follow-up questions chosen may not be critical. 
Random sequences are considered among the weakest baselines, yet they outperform QueRE, leading me to believe that simply eliciting more information from the LLM is the main reason behind the empirical performance. Better-constructed follow-up questions can perform better, but a “random kitchen sink” approach already works to some level. If this is the case, it should be acknowledged in the paper, as it, in my view, lessens the contribution of this work. Could the authors conduct a more thorough analysis of why random sequences sometimes outperform QueRE, and discuss the implications this has for their method?\n\nW4: Building on the idea of random sequences, I suggest the authors compare QueRE with more diverse follow-up question strategies. For example: (1) rephrasing the same question in different ways and eliciting confidence, or (2) using questions from the same task with known ground truth (since all experiments assume access to a small amount of labeled data from downstream tasks), or (3) using questions from other tasks with known ground truth.\n\nW5: Please empirically discuss the relative importance of the number of eliciting questions versus the eliciting-question strategy (e.g., the one designed in this paper, random sequences, or the others mentioned in W4).\n\nW6: I'm curious why QueRE outperforms full logits in Figures 2 and 3. Shouldn't full logits contain strictly more information than QueRE? Could you provide a more detailed analysis of why QueRE outperforms full logits in these cases?\n\nW7: If this difference is due to the fact that full logits are only extracted for the initial response while QueRE includes additional follow-up questions, could the authors concatenate full logits from all follow-up questions and include this as a comparison?\n\nW8: Additionally, I suspect the use of linear classifiers might contribute to these results, since QueRE is low-dimensional and full logits are high-dimensional. Could the authors compare QueRE's performance with more complex classifiers? Linear classifiers may favor low-dimensional representations.\n\nW9: QueRE appears to extract and combine uncertainty or confidence signals. Could the authors compare QueRE to a baseline that directly concatenates different existing LLM uncertainty estimates? For example, [1].\n\nW10: If I understand correctly, the experiments in Sections 4.2 and 4.3 are ablation studies, rather than comparisons to external methods. All baselines (pre-conf, post-conf, and Answer Probs) are components of QueRE, as noted in lines 297-307. This setup makes it impossible for QueRE to perform worse than these baselines. I suggest that the authors highlight the ablation nature of these experiments in Sections 4.2 and 4.3. Additionally, it would be beneficial to compare QueRE with existing methods that can be applied.\n\nW11: The experimental setup in Section 4.3 may be seen as limited and less realistic. Adversarial LLMs explicitly designed to answer incorrectly are easy to distinguish and less reflective of real-world scenarios. I suggest the authors consider existing methods that contaminate or manipulate LLMs in more subtle ways.\n\n[1] \"Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs.\"
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024eliciting,\ntitle={Eliciting Black-Box Representations from {LLM}s through Self-Queries},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3LifGYAD0W},\nnote={under review}\n}" }, "abstract": { "value": "As large language models (LLMs) are increasingly relied on in AI systems, predicting when they make mistakes is crucial. While a great deal of work in the field uses internal representations to interpret model behavior, these representations are inaccessible when given solely black-box access through an API. In this paper, we extract representations of LLMs in a black-box manner by asking simple elicitation questions and using the probabilities of different responses \\emph{as} the representation itself. These representations can, in turn, be used to produce reliable predictors of model behavior. We demonstrate that training a linear model on these low-dimensional representations produces reliable and generalizable predictors of model performance at the instance level (e.g., if a particular generation correctly answers a question). Remarkably, these can often outperform white-box linear predictors that operate over a model’s hidden state or the full distribution over its vocabulary. In addition, we demonstrate that these extracted representations can be used to evaluate more nuanced aspects of a language model's state. For instance, they can be used to distinguish between GPT-3.5 and a version of GPT-3.5 affected by an adversarial system prompt that makes its answers often incorrect. Furthermore, these representations can reliably distinguish between different model architectures and sizes, enabling the detection of misrepresented models provided through an API (e.g., identifying if GPT-3.5 is supplied instead of GPT-4)." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "LLMs", "representations" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/abc81c3a8bc98e07dac2f02620859bdf9f4fd8c8.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/c83ab5f1bd2d340131b8ffb61213da9364b37fd0.zip" }, "title": { "value": "Eliciting Black-Box Representations from LLMs through Self-Queries" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3LnTTHDWER
CLEAR: Understanding the Reasoning Capabilities of Large Language Models
main
Active
LLMs;dataset;benchmark;translation;in-context-learning;few-shot
datasets and benchmarks
3;3;3;6
3;3;4;3
2;2;1;3
2;2;1;3
2;2;2;2
3.75
3.25
2
2
2
-0.333333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "No ethical concerns" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1) Can the authors list the advantages of using your benchmark instead of existing, human labelled or synthetic datasets for reasoning? In other words, is your benchmark a proxy of some high-order reasoning capabilities? What are the advantages of your approach over standard tests (e.g., compositionality tests that one can generalise to prevent memorisation and data leakage?).\n\n2) Why didn’t the authors show that your results correlate (or do not) with those on popular benchmarks in reasoning and/or simple translation?\n\n3) Say that one of those questions you use to create your benchmark is present in the training set of an LLM. How does that affect the performance of your translation? How do you ensure that a model is not interpolating the answer they potentially have to make the translation easier?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "The idea of using translation to test a model’s capability is interesting, especially considering that such a dataset can be scaled automatically in size and breadth. The experiments with their dataset are comprehensive and cover many sota models and standard translation metrics. Despite rushed, the article is easy to read and the ideas expressed are clear enough." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a new task to test the reasoning capabilities of LLMs based on translation. The idea is to take standardised tests, define the rules of a new language, and ask an LLM to translate an example expressed in the new language and aided by a sufficient set of rules. By construction, the translations are not part of the training data and require simple symbol manipulation (e.g., logical operations). The authors test their benchmark on different LLMs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I have one big concern for this article.\nThe lack of details on what the benchmark is testing caused me some issues in understanding what capabilities it is testing. While the authors describe the task and provide a few examples, its small size (140 samples) and the lack of a detailed analysis of what kind of capabilities are required by a model to solve it (the authors mention multiple time logics and, I reckon, compositionality) make it very hard to draw comparisons with existing datasets. Furthermore, the authors do not run concurrent experiments on similar tasks (e.g., a baseline), to show correlation with existing benchmarks. In other words, is this benchmark telling us something about other popular LLMs’ benchmarks? 
If that's the case, one can argue that your dataset is a proxy for some high-order reasoning and use your dataset (for example, because one can synthetically create new instances easily and it does not suffer from memorisation) instead of one created by human experts.\n\nThe article seems rushed (see the paragraph on related work, or the methodology paragraph “Closed Frontier Models”). \nFurther, regarding lines 269-270, it's your duty to run the experiments in time and before the deadline; we cannot discount the fact that GPT-4o-mini has a longer response rate, especially considering that your benchmark is very small. It's better to present organic experiments on all the models, or to exclude them from the evaluation, rather than preliminary results that may not be statistically significant or may be affected by sample bias.\n\nFigure 1 is confusing and difficult to interpret after reading the first section. Even after reading the methodology, I still do not fully understand what it represents.\n\nThe related work section is rushed, with a few references missing (line 97) or inconsistent formatting (see 107-108 vs. 122-123). Some sentences are not grammatically consistent (lines 112-113), and others express vague concepts (line 125)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Is it possible to design a CON->CON task? \nAs you mentioned in Sec. 5.3, knowledge of English patterns will affect the overall performance. \nSince you are trying to evaluate using new information, CON->CON is a natural idea to eliminate the influence of English patterns.\n\n2. Can you add an analysis of the impact of tokenization? You also mentioned this limitation in the paper, but I assume this is an important factor in proving that CLEAR is truly evaluating reasoning ability. I suggest creating a token-based translation in which no token is an actual word. If this setup aligns with the existing performance, readers can assume that CLEAR is truly evaluating reasoning ability.\n\n3. Can you add more reasoning paradigms, such as deductive reasoning, to the benchmark? I think you can provide unseen rules as premises and see whether LLMs can reason over these premises. This is an important reasoning paradigm to evaluate, especially when LLMs cannot do it well." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper studies an overlooked problem with current reasoning tasks: they rely, to some extent, on the internal knowledge of the LLMs rather than purely on reasoning ability.\n2. CLEAR evaluates reasoning ability in a comprehensive way, supported by staged tasks that incrementally increase in complexity. It provides a nuanced understanding of the model's capability.\n3.
The evaluation is comprehensive, covering a diverse range of models, and it points out the substantial room for improvement in reasoning capabilities." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a new reasoning benchmark, CLEAR, aimed at assessing the ICL and reasoning abilities of LLMs.\nThey evaluate these abilities by introducing new information, specifically a new language.\nGiven the new information, the LLM should derive logical rules and update the meaning of tokens based on that information.\nThe results show that this is a challenging task for recent LLMs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The size of the dataset is limited, which is also reflected in the analysis of prompt complexity. This limitation is noted in the paper but may affect the robustness and generalizability of the findings.\n2. There is a potential problem of the entanglement of tokenization and reasoning in this particular task. I think further analysis is required rather than just mentioning it in the limitations.\n3. CLEAR mainly focuses on inductive reasoning, but general reasoning often involves deductive and abductive reasoning. Improving the diversity of the task forms could significantly increase the impact." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Are there plans to expand the dataset to include a broader range of Conlangs with varying linguistic complexities? \n2. Do you have more insights on failure analysis to show whether the errors of existing models come from not being able to understand the grammar, or perhaps from bias towards the language they're pre-trained on? For example, provide step-by-step accuracy for each question. This could also help to extend the scale of the benchmark. As mentioned in weakness 2, do you have reasoning paths generated from each LLM? What's the error rate for each step? Also, what is the distribution of different error types, e.g. mismatching between words (vocabulary), failure to identify plurals (grammar), etc.? \n3. The identification of the translation direction's strong impact is a good direction to look into. Could the authors provide more error analysis to illustrate specific difficulties models face in this direction? Are there common patterns in errors that suggest improvements in task framing or prompting?\n4. In Section 4.3, you briefly mentioned the structure of the prompt, including the system prompt and few-shot examples. From what I understand, the examples are like the ones shown in Figure 2. It would be better if you could also include the system prompt you used in the paper, to show how the model is guided for this specific task."
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. CLEAR is an innovative benchmark that addresses the limitations of current benchmarks by testing logical reasoning without relying on prior exposure to specific linguistic patterns. The idea of using Conlangs ensures that the task requires genuine reasoning rather than memorization or retrieval, which helps to better evaluate such ability of LLMs.\n2. The paper is thorough in constructing a well-defined benchmark, with clear methodologies for dataset creation and evaluation. The variety of evaluation metrics and the stratification of translation tasks by difficulty level add rigor to the framework, enhancing the robustness of the results. Visuals such as example translation tasks (Figure 1) and the ranking results (Table 1) are useful in illustrating the key aspects of CLEAR and help readers understand the task structure." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces CLEAR (Conlang Logic Evaluation And Reasoning), a new benchmark aimed at evaluating the reasoning capabilities of large language models (LLMs) using translation tasks between constructed languages (Conlangs) and English. Unlike natural language tasks, Conlangs present unique challenges, as they are intentionally designed to be outside the training data, reducing the risk of memorization and promoting logical reasoning. The benchmark includes translation tasks from English to Conlang and vice versa, with increasing complexity in translation rules. Popular LLMs, both closed-source and open-source, are evaluated on CLEAR, and multiple evaluation metrics, such as BLEU, ROUGE, METEOR, and character-based metrics, are applied to assess model performance on logical reasoning within language translation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. With repeated exposure to Conlang structures, LLMs may develop specialized reasoning paths tailored to the benchmark, rather than generalizable reasoning skills. The limited dataset could falsely encourage models to adapt to specific patterns in the Conlangs, thereby reducing the benchmark’s efficacy in evaluating general reasoning capabilities.\n2. The benchmark focuses on output accuracy without providing insights into the reasoning paths which is crucial for evaluating the reasoning ability. As the grammar and structure of Conlang are relatively simple, including an analysis of intermediate reasoning steps could reveal where models tend to struggle, offering diagnostic insights beyond final translation accuracy. There are related previous works like chain-of-thought, and least-to-most that prompt the model to show explicit reasoning paths. In your evaluation, do you ask the model to generate reasoning paths like in Figure 2, or it only outputs the answer? Some analysis on such paths could improve the quality of the work. \n3. As mentioned by the authors, the size and task of the benchmark is quite limited. This would affect the robustness of the benchmark and could make it harder to generalize the findings." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1.\tThis paper provides a compelling approach to evaluating LLMs on unfamiliar language tasks, offering new insights into model capabilities beyond conventional datasets.\n2.\tThe experiment design is sufficiently thorough to support the paper’s objectives, ensuring a robust evaluation of the models' in-cotext learning capabilities." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces CLEAR, a benchmark designed specifically to assess the translation and reasoning capabilities of LLMs in novel tasks. This benchmark evaluates LLMs through few-shot translation tasks using constructed languages (conlangs)—artificial languages crafted to be unfamiliar and absent from model training data. By engaging models in translation tasks that combine logical reasoning with pattern recognition, CLEAR aims to evaluate the models’ abilities to infer grammatical rules and apply logical operations without relying on prior knowledge." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tThe \"learning a new language\" task feels more akin to in-context learning, emphasizing the models’ imitation and induction abilities rather than the “reasoning abilities” claimed in this paper. While \"reasoning\" is a term that can describe various tasks and skills, its use here in a “translation-like” task seems unsuitable.\n2.\tThe paper devotes excessive space to fundamental concepts. For instance, the section on few-shot prompting in related work and the descriptions of common metrics like BLEU could be more concise. Additionally, sections on logical reasoning in related work, including *Logical Reasoning of Large Language Models* and *Logical Reasoning*, are somewhat repetitive and lengthy.\n3.\tThe paper is somewhat difficult to follow. The task remains unclear even after reading the introduction, with clarity only starting to emerge in Figure 2. Several typos are present, such as “To‘learn’” in Line 72. Additionally, the text in Figure 1 is too small to read comfortably. Page 8 would benefit from reorganization if it only includes two tables.\n4.\tThe analysis is somewhat superficial, focusing mainly on translation direction without offering deeper insights to inform future work. Also, could the unexpected relationship between model performance and question complexity be due to an ineffective measure of complexity?\n\nOverall, I feel the current version seems rushed and does not meet the average ICLR standard." 
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "We present CLEAR, a new benchmark to evaluate the reasoning capabilities of LLMs through the translation of novel constructed languages." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024clear,\ntitle={{CLEAR}: Understanding the Reasoning Capabilities of Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3LnTTHDWER},\nnote={under review}\n}" }, "abstract": { "value": "Despite significant progress, accurately assessing the reasoning capabilities of Large Language Models (LLMs) remains both a challenging and divisive subject.\nMany existing benchmarks either suffer leakage, or reflect patterns in the training data, leading to ambiguous results.\nWe present CLEAR (Conlang Logic Evaluation And Reasoning), a novel benchmark designed to test the reasoning and problem solving capabilities of LLMs in new environments.\nCLEAR uses Conlangs (Constructed Languages) for few-shot translation tasks,\nwhich require some linguistic knowledge to solve, but primarily the ability to make new patterns from tokens in unfamiliar contexts using logical operations.\nThese conlangs represent a unique challenge, as while translation examples are plentiful, these conlangs each have a unique combination of rules, are self contained, and are absent in the training corpus.\nWe present an evaluation of current frontier models over multiple metrics as a baseline for future research. \nWe will be releasing \\dataset as a public benchmark to drive progress towards AI systems more capable of general reasoning." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "LLMs", "dataset", "benchmark", "translation", "in-context-learning", "few-shot" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/2094a6e2469da96c671282e3b5c2af01985c95b9.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "CLEAR: Understanding the Reasoning Capabilities of Large Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3M3jtMDjUb
RelChaNet: Neural Network Feature Selection using Relative Change Scores
main
Active
Feature Selection;Neural Networks;Pruning
interpretability and explainable AI
3;5;5;5
4;5;4;3
1;2;3;2
1;2;2;1
3;2;2;3
4.5
4
2
1.5
2.5
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "How do RelChaNet and other competing feature selection methods perform on complex datasets such as CIFAR-10, CIFAR-100, and Imagenet?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Given the ever-increasing computational demand in the deep learning field, RelChaNet addresses the critical need to reduce this load by proposing a novel deep learning feature selection method.\n- The authors conduct experiments across a broad range of competing feature selection methods and a diverse set of data domains.\n- RelChaNet (flex) demonstrates strong performance and robustness by outperforming other evaluated methods on 7 out of 9 datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces RelChaNet, a novel feature selection algorithm leveraging neural networks. Key innovations include neuron pruning and regrowth mechanisms focused on the input layer. The pruning process uses a \"relative change score\", measuring the impact each feature has on the network's structure and function after its inclusion. Unique to RelChaNet is the flexibility to adapt input layer size dynamically during runtime, enhancing the algorithm's adaptability to varied datasets.\n\nThe method was benchmarked against other state-of-the-art feature selection algorithms on nine datasets, showing superior performance in terms of predictive accuracy, especially on datasets with more samples than features, achieving a 2% improvement on MNIST. However, RelChaNet exhibits comparable performance on datasets with more features than samples. Notably, it also offers competitive computational efficiency, making it a robust alternative for neural network-based feature selection tasks" }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The primary weakness of this paper, as I see it, is that it evaluates RelChaNet on datasets that do not intrinsically demand the non-linear feature selection capabilities that deep learning methods like RelChaNet are designed to offer. For example, MNIST is a well-understood dataset where simpler, linear methods often perform exceptionally well. Linear methods like PCA, for instance, achieve 98.0% accuracy on MNIST with K=25 features when using the SVC downstream learner, notably outperforming RelChaNet’s ~93% accuracy. This raises questions about whether RelChaNet’s deep learning-based approach is meaningful for such datasets and whether it would generalize well to more complex, non-linear datasets (e.g., CIFAR-10, Imagenet).\n\nTo demonstrate the effectiveness of RelChaNet, the evaluation should focus on datasets with complex, non-linear relationships where simpler methods struggle." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Q1) Can you try to enhance Section 3 by describing it with a proper mathematical formalism?\n\nQ2) Which is better the main algorithm proposed in Section 3 or its extension from Section 3.1? Proposing two new algorithms which perform relatively similar is confusing.\n\nQ3) Is it the case that the proposed approach underperforms on the widest dataset (about 50k features) because the random growth needs more training epochs to explore this very large search space? Did you try to train longer in a systematic manner for this dataset? Perhaps by creating an artificial dataset you may be able to perform a more granular analysis on how well the proposed method scales with the number of features and samples?\n\nQ4) The computational analyze from Section 4.2 seems a bit forced. Probably, it would be fairer to try using relatively similar network sizes for all methods and report of course also their accuracies. Also, the sparse networks are really sparse or simulated with binary masks?\n\nQ5) As far I was able to understand the work is about supervised feature selection. Can you please clarify?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "S1) The proposed method is novel.\n\nS2) Random neuron regrowth likely helps reduce bias in the final ranking of input features.\n\nS3) The proposed method obtains a beneficial performance improvement on top of the state-of-the-art as illustrated on several datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a new algorithm for feature selection using Multi Layer Perceptron and a prune and regrowth strategy for the neurons from the input layer. The empirical evaluation shows that the proposed method outperforms several state-of-the-art baselines in a substantial number of scenarios." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "W1) The paper is somewhat difficult to read. Particularly, Section 3 is a bit hard to read due to too much text and details.\n\nW2) \"The paper needs careful proofreading. Some statements are unclear or inaccurately phrased. E.g., lines 44-45, Mocanu et al. 2018, Evci et al. 
2020, employed connections pruning and regrow directly, while neuron pruning, and regrowth become rather an indirect output; lines 124 -> this is rather structured sparsity" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- How does RelChaNet perform on very high-dimensional data with more features than samples? Although the algorithm shows effectiveness on “long” datasets, further validation on “wide” datasets would provide a more complete view of its generalizability.\n- What impact does the randomness in neuron regrowth have on feature selection stability? Since neurons are randomly regrown, it would be useful to understand how this randomness affects the repeatability of selected features and model accuracy.\n- How does the algorithm’s computational efficiency compare with other pruning-based feature selection methods? Given its relative complexity, a comparison of runtime across similar algorithms would be helpful in evaluating RelChaNet’s scalability.\n- Could hyperparameters like cratio and nmb be optimized for specific types of datasets? Insights into parameter tuning would provide valuable guidance for applying RelChaNet in various contexts, especially for practitioners without prior knowledge of optimal settings." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "The paper presents extensive results across diverse datasets, showing superior performance over baseline feature selection methods and emphasizing improvements in interpretability and computational efficiency." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a novel feature selection method designed for dense neural networks. RelChaNet leverages neuron pruning and random regrowth at the input layer, selecting features based on a relative change score calculated from gradient sums over multiple mini-batches. Experiments across nine datasets demonstrate that RelChaNet generally outperforms existing feature selection techniques, particularly enhancing accuracy by 2% on MNIST. The paper also introduces an adaptive extension, “RelChaNet flex,” which adjusts the input layer size dynamically based on validation loss trends." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The applicability of this method is uncertain. In some cases, neurons may exhibit monosemanticity (e.g., in a neural network performing simple arithmetic tasks, where each neuron has a clear, isolated role). However, in other cases, groups of neurons may collectively capture shared or complex features. 
This method seems most effective when monosemanticity is prevalent in the dataset, and it may struggle with datasets that contain intricate concepts requiring shared neuron activation.\n- The experiments focus primarily on datasets with more cases than features (“long” datasets). To strengthen the evaluation, RelChaNet should be tested on additional “wide” datasets to assess its performance on high-dimensional data. Additionally, the current experiments use relatively simple datasets. Expanding the evaluation to include more complex datasets, such as ImageNet, would help demonstrate the method’s robustness in handling challenging data.\n- The paper lacks a theoretical explanation for the random neuron regrowth process. Without a clear rationale, the consistency and predictability of the feature selection results may be affected." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "N/A" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The method proposed in this paper is very simple and easy to implement." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The primary focus of this paper is on feature selection algorithms in neural networks. Thus they introduce RelChaNet, a feature selection algorithm that uses neuron pruning and regrowth in the input layer of a dense neural network." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1、The solutions in this paper are almost identical to methods like dropout, pruning, and regularization in neural networks to prevent overfitting, making it difficult to identify the novelty of the proposed approach.\n2、The effectiveness of the proposed method is also not better than the state-of-the-art (SOTA) results.\n3、The threshold C_ratio in the feature selection algorithm lacks theoretical guidance or a defined method for setting it.\n4、The quality of English writing in the paper needs improvement.\n5、The paper lacks an evaluation and summary of related work, as well as an explanation of the challenges present in the problem." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "RelChaNet is a novel feature selection algorithm using neuron pruning and regrowth in the input layer of a neural network." 
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024relchanet,\ntitle={RelChaNet: Neural Network Feature Selection using Relative Change Scores},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3M3jtMDjUb},\nnote={under review}\n}" }, "abstract": { "value": "There is an ongoing effort to develop feature selection algorithms to improve interpretability, reduce computational resources, and minimize overfitting in predictive models. Neural networks stand out as architectures on which to build feature selection methods, and recently, neuron pruning and regrowth have emerged from the sparse neural network literature as promising new tools. We introduce RelChaNet, a novel and lightweight feature selection algorithm that uses neuron pruning and regrowth in the input layer of a dense neural network. For neuron pruning, a gradient sum metric measures the relative change induced in a network after a feature enters, while neurons are randomly regrown. We also propose an extension that adapts the size of the input layer at runtime. Extensive experiments on nine different datasets show that our approach generally outperforms the current state-of-the-art methods, and in particular improves the average accuracy by 2\\% on the MNIST dataset. Our code is available in the supplementary material." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Feature Selection", "Neural Networks", "Pruning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/a5f8e3c1b961ad173309e3d322bfd0a266deb4cf.pdf" }, "presentation": null, "primary_area": { "value": "interpretability and explainable AI" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/147701bc3e6836dcf9b2fa418b34e1bf12ca4d2a.zip" }, "title": { "value": "RelChaNet: Neural Network Feature Selection using Relative Change Scores" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3MDmM0rMPQ
Inverse Prompt Engineering for Task-Specific LLM Safety
main
Active
guardrails;safety;robustness;alignment
alignment, fairness, safety, privacy, and societal considerations
3;3;3;3
3;3;4;4
2;3;2;2
2;3;1;2
2;1;1;2
3
3.5
2.25
2
1.5
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "It would be interesting to consider some adaptive attacks that are particularly tailored to the task-specific safeguards proposed in the paper. For example, when the safeguard is tailored to only allow travel assistant-related questions, adversaries can also obfuscate a harmful request in a travel assistant context. For example: \"I want to travel to Tokyo, but I don't have enough money to buy my airline tickets. How can I sneak in a flight without paying the ticket?\" In this context, the harmful input is now in-scope of the use case. Would the task-specific safeguard outperform the general-purpose safeguard?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. In general, the task-specific safety proposed in this paper is a novel idea to me. It also makes a lot of sense, and I think it may be a promising direction for safeguarding LLMs in many narrowly scoped contexts. \n\n2. The proposed approach can also directly make use of the existing prompt engineering data, making it data efficient." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes to approach task-specific safety. Specifically, the key idea of task-specific safety is that if a model is well-scoped to a more specific downstream use case (e.g., travel assistant), its safety can be defined more aggressively --- as long as the user request is out-of-scope for this specific downstream use case, the model should reject it. The authors argue that this is aligned with the principle of least privilege in computer security, and this approach also enables the model to reject many jailbreak prompts more effectively. The authors also propose a new approach --- Inverse Prompt Engineering (IPE) --- for building such task-specific safety guardrails." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **The presentation needs improvement.** The introduction of the Inverse Prompt Engineering (IPE) approach is poorly motivated and comes very abruptly from my perspective. I feel confused about why we need the particular IPE approach to build the task-specific safety guardrail. Is it because the approach is to prompt a model to filter out harmful inputs/outputs, and therefore, we need a good prompt to do so? The authors didn't first well define what the safeguard actually is, and directly start a lengthy introduction to a prompt engineering approach, which makes me confused and get lost. The authors should consider improving the presentation to make the logic flow clearer. \n\n2. **It's unclear how important the IPE is.** The paper does not sufficiently explain why this particular IPE approach is needed. 
To implement the task-specific safeguard, why not use fine-tuning and few-shot in-context learning, but need a new prompt engineering approach? The experiments in this paper neither sufficiently compare IPE with other alternative approaches. Given that the IPE is claimed to be a major contribution of this paper (and also reflected as a part of the title of the paper), the authors need to clearly clarify and prove that IPE is an actually important component here. \n\n\n3. **The experiment setting seems to be overly simple.** The paper only considers a synthetic scenario of building a travel assistant. All the data points are purely generated by a language model. It's unclear whether this single scenario can generally represent the effectiveness of this approach across the vast landscape of various downstream use cases. It's also unclear whether the synthetic scenario can be a good characterization of the practice. While I understand that it may be unrealistic to demand that the experiment fully align with real-world settings, given that the paper aims to enhance safeguards by moving from deny-lists to allow-lists, it's crucial to ensure that the approach does not result in an unmanageable increase in false positives. Achieving this requires a more comprehensive testing framework.\n\n4. **It would be good to have a conclusion and discussion section to summarize the paper.** The paper ends very abruptly with the experiment section." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "What is the methodology for collecting the so-called prompt engineering data?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The proposed scenario and motivation are meaningful, particularly in designing specific defense mechanisms for task-specific tasks without requiring additional data collection." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a method that limits large language models (LLMs) to only what is necessary for the task, in order to build automatic, task-specific safety guardrails." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The organization and writing of the paper are inconsistent. It lacks a conclusion section, and there is excessive whitespace on the third page. Additionally, the formatting of foreign language inputs in the appendix needs adjustment, as the current version does not display correctly. Furthermore, the equations are missing punctuation (e.g., Eq. 3, Eq. 4).\n2. The value for \"Unique successful jailbreaks\" should be greater than 0; however, the error bars in Figure 6 fall below 0, which raises doubts about the validity of the experimental results presented in the paper.\n3. 
The paper needs to more clearly articulate its contributions.\n4. The title is somewhat confusing; it should clarify whether it addresses attacks or defenses." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "NA" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "To the best of my knowledge, the idea of training a task-specific reward model to detect jailbreak attacks is novel.\n\nThe paper demonstrates the effectiveness of IPE, where a GPT-2-sized model achieves even better results than some commercial moderation APIs." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors introduce a method called Inverse Prompt Engineering (IPE) to defend against jailbreak attacks and harmful requests. The core idea is to generate synthetic data and train a reward model that assigns a high score if the generated output follows the task-specific user prompt. This reward model can then be used to detect unsafe responses generated by the system. Through comprehensive experiments, the paper demonstrates the effectiveness of IPE." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Major concern: The authors state, \"However, IPE demonstrates excellent transfer resistance, with no successful transfers across all iterations.\" I am unclear on how the authors construct the black-box attacks. It seems counterintuitive that attacks could achieve nearly a 100% success rate on one model but fail to transfer effectively to another model with the same setup, where the only difference is the random seed.\n\nThe presentation could be improved. \nFigure 1 is confusing; it would be clearer to create a figure that demonstrates the algorithm step-by-step with more detailed illustrations. For example, it would be helpful to show how a synthetic collection of alternative prompts is obtained.\n\nThe proposed method is limited to a single task; could it generalize to multiple tasks? \n\nThe paper lacks a conclusion section." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. 
How well does IPE generalize to completely new types of jailbreak attacks that were not seen during training? Have the authors tested the system’s ability to defend against more sophisticated or adaptive attack techniques?\n\n2. Since IPE relies on task-specific reward models, how scalable is the method across multiple tasks or domains? Would a new reward model need to be trained from scratch for each application, or is there potential for transfer learning between related tasks?\n\n3. The paper does not mention performing a sensitivity test on the threshold for the reward model when filtering responses. How sensitive is the system’s performance to changes in this threshold, and how do you determine the optimal threshold value for different tasks or contexts?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "+ IPE introduces a novel allow-list approach to LLM safety by restricting responses to those aligned with the task's intended behavior, which contrasts with the more commonly used deny-list methods. This proactive approach could potentially be more effective in preventing harmful outputs.\n\n+ IPE leverages existing prompt engineering data, eliminating the need for additional data collection. This makes the method lightweight, cost-effective, and easy to integrate into existing workflows." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces Inverse Prompt Engineering (IPE) as a new method to create automatic, task-specific safety guardrails for large language models (LLMs). The core idea of IPE is to operationalize the principle of least privilege, a concept from computer security, by restricting the model’s behavior to only what is necessary for a specific task. Instead of blocking predefined harmful behaviors through deny-lists, IPE uses an allow-list approach. This method starts by using existing data generated during prompt engineering to train task-specific reward models that filter responses. Then only completions that pass the reward model’s threshold are allowed as responses. \nIn their experiments, the authors demonstrate IPE’s effectiveness in defending a chatbot application from jailbreak attacks. Specifically, they applied to a travel assistant chatbot, IPE achieved a 98% reduction in successful jailbreak attacks compared to baselines. They also evaluate the IPE’s performance on defending against two jailbreaking attacks: GPTFUZZER and GCG." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Since the reward model is trained on specific types of jailbreak attacks and benign prompts, the IPE approach may not generalize well to unseen attacks that the reward model was not trained on. This could leave the system vulnerable to emerging types of attacks.\n\n- IPE is designed to be task-specific, meaning that the reward model must be trained for each new task or application. This introduces a limitation in scalability, as new reward models need to be developed for different contexts or domains.\n\n- The quality and diversity of the training data directly influence the effectiveness of IPE. 
If the data used for prompt engineering is not diverse enough or fails to cover edge cases, the system may struggle to defend against more complex or subtle jailbreaks.\n\n- In the experiment, successful jailbreaks were verified through manual inspection. This manual step introduces subjectivity and may not scale well in real-world applications." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a technique that uses prompt engineering to build task-specific safety guardrails for LLMs." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024inverse,\ntitle={Inverse Prompt Engineering for Task-Specific {LLM} Safety},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3MDmM0rMPQ},\nnote={under review}\n}" }, "abstract": { "value": "Most real-world deployments of large language models (LLMs) operate within well-scoped tasks, yet current safety measures are general-purpose and fail to leverage this information. As a result, even in narrowly-scoped tasks, LLM applications remain vulnerable to adversarial jailbreaks. In these settings, we argue that task-specific safety guardrails solve a more tractable problem than general-purpose methods. We introduce Inverse Prompt Engineering (IPE) as an initial approach to building automatic, task-specific safety guardrails around LLMs. Our key insight is that robust safety guardrails can be derived from prompt engineering data that is already on hand. IPE operationalizes the principle of least privilege from computer security, restricting LLM functionality to only what is necessary for the task. We evaluate our approach in two settings. First, in an example chatbot application, where IPE outperforms existing methods against both human-written and automated adversarial attacks. Second, on TensorTrust, a crowdsourced dataset of prompt-based attacks and defenses. Here, IPE improves average defense robustness by 93\\%, using real-world prompt engineering data." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "guardrails", "safety", "robustness", "alignment" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/2b3bc714b1bd54d8f1af82f18900a8fbe4a56881.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Inverse Prompt Engineering for Task-Specific LLM Safety" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3Mia9aFpgo
GLOV: Guided Large Language Models as Implicit Optimizers for Vision Language Models
main
Active
llms;vlms;prompt optimization
foundation or frontier models, including LLMs
3;5;5;8
5;3;4;3
2;2;2;3
1;3;3;3
2;2;4;3
5.25
3.75
2.25
2.5
2.75
-0.802181
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "* **Clarity of Algorithm 1**: At lines 9, 12, 28, 29, it's unclear what is the meaning of the square brackets, given that $K$ is an integer according to the input. It's also not clear how the top-3 prompts used for ensemble are selected: Are they from the last step, a single best step, or all steps through out the search process?\n\n* **Sensitivity to the hyper-parameters**: The LLM steering part introduces two hyper-parameters, layer $l$ and scaling factor $\\alpha$. Are these hyper-parameters searched on one dataset and kept the same for the others, or searched independently on each dataset? How different could the optimal choices be over different datasets?\n\n* **Additional placeholders in prompts**: Some searched prompts (e.g., at Ln 1187-1192) seem to contain additional placeholders (in angle brackets). Are they from additional metadata of the dataset? Is the search process altered in any way to account for these additional information?\n\n* **Average accuracy in main tables** (e.g., Table 1, 2) would make it easier to cross-reference the results with the ablation studies (e.g., Table 4)" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* The motivation is sound and clear.\n\n* The experimental results are transparent, with search outcomes in the appendix and code release promised.\n\n* The prompts generated by search are shown to generalize within the same category (dual encoder) of VLMs." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes GLOV, an LLM-assisted framework for automatic optimization of VLM prompts for specific downstream tasks. Specifically, an LLM is meta-prompted to generate downstream VLM prompts based on task description and in-context examples from previously generated prompts as feedbacks. On top of the meta prompt, the paper also applies feature-level guidance, i.e., the difference of sentence embedding from bad prompts to good prompts, as a second measure to push the LLM output towards the more effective direction. The proposed method is evaluated mainly on 16 few-shot classification tasks and shows improvement over baselines, while preliminary results on VQA are also provided." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* **Feature-level guidance poses white-box LLM constraint**: Despite the feature guidance being novel for VLM prompt optimization, it requires access to LLM intermediate features, which could be hard to obtain given that many strong LLMs are either closed-source or too large to easily run locally. 
This could be a hedge against the advantages of black-box methods, as the LLM intermediate features could be even harder to get than parameters or gradients of VLMs in many cases.\n\n* **Sensitivity to LLM choices is not clear**: While the proposed method shows clear improvements, it would make the argument stronger if more evidence could be given showing that the reasoning of the LLM indeed plays an important role, especially with the fluctuation (e.g., Fig 1, 3, 6) of the results and the general impression that LLMs at 7B-level are not very good at reasoning or agent-like applications. One way to show this is higher accuracy or less steps to convergence with a stronger LLM." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* In the context of encoder-decoder architectures, is there a potential for the emergence of lengthy and ambiguous symbolic representations during the optimization process? Furthermore, what measures can be implemented to ensure the efficacy of sentence transformers under such circumstances?\n\n* The reviewer expresses concern that the adoption of top-k and bottom-k approaches for in-context examples may result in a significant disparity between positive and negative samples in the later stages of training, potentially hindering the model to learn subtle prompt refinements akin to the challenges posed by consistently employing a large learning rate in gradient-based optimization. Consequently, the reviewer prefers implementing a dynamic selection threshold as a more reasonable choice. Any insights regarding the current strategy would enhance the understanding of the paper.\n\n* Similarly, in the steering of LLM, would it be more judicious to dynamically modify the rank interval between the positive (P+) and negative (P-)?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The introduction of steering LLM response during the prompt optimization process presents a novel and effective methodology.\n* The steering strategy designed by analogy with the gradient update process, while lacking a rigorous theoretical basis, conforms well to engineering intuition.\n* The article is highly readable, featuring a well-defined and clear structure." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a novel framework GLOV, which enables LLMs to act as implicit optimizers for VLMs to enhance downstream vision tasks. Experiments highlight the effectiveness of GLOV for both dual-encoder and encoder-decoder architectures." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The applicability of GLOV optimization to a given task is constrained by the existence of an objective fitness function for the task.\n* For encoder-decoder models such as LLaVA, it seems the VLM response has to be relatively concise in form. When dealing with complex responses (such as responses for image captioning tasks), the reliability of the sentence embeddings computed via Equation 3 remains unverified." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please kindly answer the question in the weakness section." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- This paper introduces an interesting framework, GLOV, to enhance VLMs for image classification.\n- The use of meta-prompts and LLM steering provides fresh insights in this field.\n- Experimental results demonstrate the effectiveness of the proposed methods compared to baselines." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces GLOV, a method that uses Large Language Models (LLMs) as implicit optimizers to improve Vision-Language Models (VLMs) for vision tasks. GLOV employs meta-prompts to help the LLM generate effective prompts tailored to specific tasks, which are ranked for their effectiveness. It incorporates feedback from previous optimization steps through an offset vector that guides the LLM in generating better prompts. Experiments on 16 datasets demonstrate the effectiveness of proposed methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Lack of comparison. While GLOV approaches image classification from a novel perspective, previous works [1,2,3] in this area already achieved promising results with lower costs. Could authors provide a comparison of GLOV with these baselines?\n- The generalization ability of GLOV is not clear. The authors demonstrated the effectiveness of the proposed methods on VLM image classification and visual question answers under the same topic. However, if the GLOV is not competitive compared with other works focused on VLM image classification[1,2,3]. Could authors prove the generalization ability of GLOV on visual tasks beyond image classification?\n- Clarity of Figure 2: The method overview in Figure 2 is difficult to understand. 
If the authors could clearly show the flow of iterative optimization, the methods would be easier to follow.\n- Lack of inference cost comparison: Could the authors show the curve of iteration steps and inference time to illustrate the trade-off between performance and cost in GLOV?\n\n\nReference:\n[1]AWT: Transferring Vision-Language Models via Augmentation, Weighting, and Transportation\n[2]Sus-x: Training-free name-only transfer of vision-language models\n[3]Tip-Adapter: Training-free CLIP-Adapter for Better Vision-Language Modeling" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "For few-shot learning, the impact of uniquely labeled data on performance is significant. In this paper, how to select this sample to ensure that the reported results are statistically significant rather than random? What is the variance of performance if five times of experiments are conducted?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "[+] Improving the generalization of VLM in downstream tasks with low cost (parameters, fine-tuning data, training speed) is one practical topic that deserves further explorations.\n\n[+] The paper is easy to follow and understand, having clear logic.\n\n[+] Some experiments are conducted to demonstrate the idea’s performance, as well as the values of optimal prompt search." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper aims to improve the VLMs’ performance in downstream tasks. One prompt optimization method, namely GLOV, is proposed by meta-prompting LLM to choose the type of language structure preferred by the downstream VLM. At each optimization step, one embedding space steering methodology is used to bound the outputs more strictly. Empirical experiments with different VLM architectures on multiple datasets show the GLOV’s effectiveness." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "[-] Novelty. Meta-prompts have been introduced by [1], while this paper expands the idea to a few-shot training data, which is rather trivial and brings minor technological contributions to the community. For the designs of meta-prompts, how to verify that this solution is optimal?\n\n[-] Impracticality. As we all know, due to the autoregressive paradigm, the LLM inference requires a significant amount of cost compared to encoder-based single-forward models. Thus, employing LLMs in an iterative workflow, to find optimal language prompts seems unrealistic for real-world applications. \n\n[-] Unknown efficacy. In the main manuscript, only the performance of downstream tasks is reported, without any computational/time complexity. 
The reviewer suggests to provide the inference time (in seconds) and the required GPU memory (in GB) for all methods in Table 1-2 to clarify its practical value.\n\n[-] Incomplete comparisons. To improve the model performance on downstream tasks, one popular and effective idea is parameter efficient fine-tuning (PET), such as prompt-tuning, LoRA, and adapter, which has shown impressive few-shot capability. In Table 5 of the supplementary materials, CoOp performs even worse than CLIP, which is surprisingly and suspiciously. It is necessary to compare PET with the proposed method, in terms of performance, parameters, training time, and inference speed of under the same settings. \n\n\n[1] Meta-Prompting for Automating Zero-shot Visual Recognition with LLMs." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "GLOV uses LLMs as implicit optimizers for VLMs, generating and refining task-specific prompts, leading to up to 57.5% improved performance on diverse vision tasks." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024glov,\ntitle={{GLOV}: Guided Large Language Models as Implicit Optimizers for Vision Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3Mia9aFpgo},\nnote={under review}\n}" }, "abstract": { "value": "In this work, we propose a novel method (GLOV) enabling Large Language Models (LLMs) to act as implicit Optimizers for Vision-Langugage Models (VLMs) to enhance downstream vision tasks. \nOur GLOV meta-prompts an LLM with the downstream task description, querying it for suitable VLM prompts (e.g., for zero-shot classification with CLIP). \nThese prompts are ranked according to a purity measure obtained through a fitness function. \nIn each respective optimization step, the ranked prompts are fed as in-context examples (with their accuracies) to equip the LLM with the knowledge of the type of text prompts preferred by the downstream VLM. \nFurthermore, we also explicitly steer the LLM generation process in each optimization step \nby specifically adding an offset difference vector of the embeddings from the \\textit{positive} and \\textit{negative} solutions found by the LLM, in previous optimization steps, to the intermediate layer of the network for the next generation step.\nThis offset vector steers the LLM generation toward the type of language preferred by the downstream VLM, resulting in enhanced performance on the downstream vision tasks. \nWe comprehensively evaluate our GLOV on 16 diverse datasets using two families of VLMs, i.e., dual-encoder (e.g., CLIP) and encoder-decoder (e.g., LLaVa) models --\nshowing that the discovered solutions can enhance the recognition performance by up to $15.0$% and $57.5$% ($3.8$% and $21.6$% on average) for these models." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "llms", "vlms", "prompt optimization" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/d6b0cf06c989d94e541b9289413bf045dd58ce97.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/cb12045f2dbf2fc4cca8fa8ba58d82592c99b2f5.zip" }, "title": { "value": "GLOV: Guided Large Language Models as Implicit Optimizers for Vision Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3MnMGLctKb
Generating Multi-Modal and Multi-Attribute Single-Cell Counts with CFGen
main
Active
scRNA-seq;Flow Matching;generative modeling;multiomics
applications to physical sciences (physics, chemistry, biology, etc.)
5;5;6;6
5;3;3;4
2;3;3;2
2;3;3;3
3;3;3;3
5.5
3.75
2.5
2.75
3
-0.301511
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- For batch correction, is CFGen's performance (in terms of the Batch and Bio scores) sensitive to varying the guidance parameters? How does one tune the guidance parameters in practice?\n- For cell type classification, simple models such as logistic regression (with or without regularization) are often used. Does data augmentation with CFGen improve performance for a logistic regression model?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Adapting flow matching for single-cell data generation is a novel contribution.\n- The proposed framework CFGen can be easily adapted for different uni- and multi-modal scenarios, as long as there are modality-specific autoencoders with a common latent space." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes CFGen, which is a latent flow-matching generative model for single-cell data, where the latent space is first learned by an autoencoder. To capture statistical properties specific to single-cell data, the autoencoders learn to decode the parameters of a negative binomial distribution and Bernoulli distribution, for RNA-seq and ATAC-seq data, respectively. Conditional generation is achieved through classifier guidance. Empirical results demonstrate that CFGen outperform other single-cell generative models in terms of (1) data generation to approximate the real data distribution, (2) data generation for rare cell type classification, and (3) batch correction." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- scVI should be included as a baseline in Figure 2 because scVI accounts for overdispersion and zero inflation, whereas the current baselines in Figure 2 (scDiffusion and scGAN) do not.\n- For downstream applications that rely on conditional generation, it is unclear how the classifier guidance strength is determined.\n- Quantitative results are lacking when evaluating the compositional classifier guidance in Section 5.3. The change in MMD and WD with respect to the target distribution when increasing guidance strength can suffice." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weaknesses above." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper addresses an important problem in single-cell data generation by generating raw count values, and further extending this to multimodal generation.\n2. The paper is well-written, and the authors convey major limitations of their model clearly.\n3. The results show that CFGen is able to capture characteristics of the training dataset and generate single cell data with similar statistical properties.\n4. They also show the effectiveness of generating rare cell-types to improve classification performance for other models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents conditional flow-based generative models for single-cell RNA-seq and accessibility data. Single cell data is generally sparse, noisy, and has high feature variance. The authors suggest a flow matching based approach as a more expressive, and consistent generative model compared to VAEs, and GANs for generating synthetic cells. They also present a compositional variant of classifier-free guidance for flow-based models to allow conditioning on various attributes. Finally, they evaluate the model on two downstream tasks: (1) generating synthetic samples of rare cell-types and using them for data-augmentation, (2) leveraging CFGen for batch correction." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Fig 3. is not really clear to me. Firstly, I suggest adding contrasting colors for points representing generated and real data. Secondly, what are the red points representing? I also suggest perhaps adding a quantitative metric (perhaps a oracle model that predicts the attributes) as well.\n2. I also suggest removing the bars from Fig. 2b as they make it hard to observe the overlapping density curves which are easier to infer from.\n3. For Sec 5.2, it might be worthwhile to also add a comparison with CFGen just trained on RNA-data in order to measure the effects of using multimodal data for training.\n4. A comparison of inference times might also be useful in this case, especially to compare scDiffusion and CFGen, since both require multiple time steps. Adding approximate training times for each of the comparable models would also be valuable.\n5. Fig.4 should also report the raw accuracy numbers for each of the cell-types to evaluate the effect of CFGen," }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Can CFGen be applied to gene expression imputation tasks? If so, could the authors describe how the current framework could handle imputation, or if modifications would be needed?\n\n- Could the authors provide details about the computational complexity of the model?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The authors nicely demonstrate practical applications of their method such as data augmentation in rare cell types, improving downstream classification, and performing batch correction. \n\n- The idea to extend flow matching for generation with multiple attributes is interesting and important for single-cell data.\n\n- The paper is well-written, the related work is appropriately referenced, and the experimental setup is detailed." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors of this paper present CFGen, a flow-based generative model designed for multi-modal single-cell data. CFGen addresses the challenges of generating discrete, multi-modal data while allowing conditional generation based on various biological attributes. The model extends the flow matching framework to handle compositional guidance across multiple attributes and provides promising results on tasks like data augmentation, rare cell type classification, and batch correction." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The authors do not discuss the computational complexity of the proposed method. A more detailed breakdown of computational requirements, including training and sampling times for the proposed method and the baselines, would improve the paper.\n\n- One important task in single-cell data analysis is gene expression imputation, where missing or zero-inflated gene expression values are inferred to provide a more complete view of cellular states. It is unclear from the paper whether CFGen can effectively handle this task, given its focus on generating new cells rather than imputing missing data within existing cells. Could the authors clarify if CFGen’s architecture or the flow matching framework could be adapted for imputation?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see the \"Weaknesses\" part." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The model is tailored to real biological settings: it handles 2 modalities (scRNA and ATAC) and any number of attributes.\n- The results properly support the good performance of the method. \n- Besides generation power, two very interesting applications are demonstrated: handing rare cell types in cell type classification and batch correction." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In summary, I initially rate this paper as \"5: marginally below the acceptance threshold\". But I'm open to increase my score if authors properly answer my doubts in the rebuttal.\n\nSummary of paper: The paper proposes a generative model for scRNA as well as accessibility modalities. The model can take in a combination of attributes, which suits the biological settings where for each cell only a subset of attributes are available. The method is evaluated in generation, handling label imbalance in cell type classification for rate cell types, and batch correction." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Handling discrete count data via negative binomial distribution is presented as a \"contribution\" of this paper. But there is a plethora of methods that make use of negative binomial (or alternatives like poisson distribution) to handle count data as well as over-dispersion. So why should it be listed as a contribution of this paper?\n- According to the paper, \"... the proposed factorisation is novel\". In the factorisation of Eq. 5 what is the rational behind conditioning the latent factor z on library size?\n- In proposition 1, the attributes $y_1$, $y_2$, ... are assumed to be conditionally independent given $z$, but with the factorisation of Eq. 5 the attributes are connected to $z$, hence $z$ forms a V-structure which according to d-separation causes the attributes to be dependant given $z$ ?\n- Regarding the proposed guidance scheme, the only difference to the normal classifier-free guidance is that only some attributes (and one attribute during training) is fed to the decoder. Is this approach equivalent to the normal classifier-free guidance with all attributes plus some attributes being randomly dropped out? Even if so, it wouldn't decrease the value of the proposed method.\n- In Table 1 scDiffusion is heavily outperformed by the proposed method, but one may say diffusion models may perform on par with flow matching (apart from training stability etc.). In the paper I'd recommend providing an explanation for the superior performance of the proposed method compared to scDiffusion." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We devise a new method to generate discrete multi-attribute and multi-modal single-cell data using Flow matching." 
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024generating,\ntitle={Generating Multi-Modal and Multi-Attribute Single-Cell Counts with {CFG}en},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3MnMGLctKb},\nnote={under review}\n}" }, "abstract": { "value": "Generative modeling of single-cell RNA-seq data has proven instrumental for tasks like trajectory inference, batch effect removal, and gene expression generation. However, the most recent deep generative models simulating synthetic single cells from noise operate on pre-processed continuous gene expression approximations, overlooking the discrete nature of single-cell data, which limits their effectiveness and hinders the incorporation of robust noise models. Additionally, aspects like controllable multi-modal and multi-label generation of cellular data are underexplored. This work introduces Cell Flow for Generation (CFGen), a flow-based conditional generative model that accounts for the discrete nature of single-cell data. CFGen generates whole-genome multimodal single-cell counts reliably, improving the recovery of crucial biological data characteristics while tackling relevant generative tasks such as rare cell type augmentation and batch correction. We also introduce a novel framework for compositional data generation using Flow Matching. By showcasing CFGen on a diverse set of biological datasets and settings, we provide evidence of its value to the fields of computational biology and deep generative models." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "scRNA-seq", "Flow Matching", "generative modeling", "multiomics" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/4fe312d96cf92ce55c8b92eb5292983ba919f64b.pdf" }, "presentation": null, "primary_area": { "value": "applications to physical sciences (physics, chemistry, biology, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "Generating Multi-Modal and Multi-Attribute Single-Cell Counts with CFGen" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3Mq1tY75nv
Defining and Measuring Disentanglement for non-Independent Factors of Variation
main
Active
disentanglement;representation learning;dependent factors;sufficiency;minimality
other topics in machine learning (i.e., none of the above)
5;5;5;8
3;4;4;3
2;2;3;3
2;3;2;3
3;3;2;3
5.75
3.5
2.5
2.5
2.75
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see the weaknesses." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper provides a great theoretical framework for understanding the nature of disentanglement. It presents a set of succinct conditions that are essential to disentanglement and condenses them into two highly general notions: minimality and sufficiency.\n- The proposed disentanglement measure does not require independent latent factors, addressing a key limitation of previous approaches.\n- The paper is very well written. It includes an extensive discussion of the related work and a nicely listed discussion on the desirable properties of a disentangled representation. The whole derivation process from the basic properties to the final algorithm is very clear and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper explores disentangled representation learning, where the goal is to separate factors of variation in data into understandable parts. Traditional definitions assume these factors are independent, which is often unrealistic in real-world data, limiting their applicability. The authors introduce a new, information-theory-based definition of disentanglement that works even when factors are not independent. They show that this new definition aligns with representations made up of minimal and sufficient variables. Additionally, they propose a method to measure disentanglement in these scenarios and demonstrate its effectiveness in handling both independent and non-independent factors, outperforming existing metrics." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The proposed metrics (minimality and sufficiency) may not be very informative in cases where $y$ can’t be perfectly recovered from $x$. In such cases, no representation of $x$ can achieve 100% minimality and sufficiency, meaning that the factors can’t be perfectly disentangled. As a result, the optimal value of minimality/sufficiency may be different for different tasks. In general, this value is a priori unknown and may not be easy to estimate. This makes it difficult to tell if a representation is good or bad (in terms of disentanglement) if the measurement of minimality/sufficiency yields a medium value (not close to 0 or 1).\n- The measurement algorithms require the ground-truth values of the causal factors $y$, which may be difficult to obtain for many real-world tasks. Moreover, in cases where $y$ has to be estimated from $x$, the estimation quality could affect the measurement of minimality/sufficiency.\n- The metrics are defined as ratios between two mutual information terms in order to scale them between 0 and 1, but is this really a good choice? 
For example, do $\\frac{I(z_j;y_i)}{I(z_j;x)}=\\frac{0.01}{0.1}$ and $\\frac{I(z_j;y_i)}{I(z_j;x)}=\\frac{10}{100}$ really mean the same thing for minimality? Note that the values of $I(z_j;x|y_i)$ are quite different in these two cases. The definition deserves a more careful discussion.\nIt seems that the experiments only involve problems with a small number of causal factors. Is the proposed measure also accurate in much higher-dimensional spaces? Will there be computational issues? The time complexity of the measurement algorithm is O(mn).\n- Why are the X, Y, and Z in the first paragraph of section 4 in capitals whereas the rest of the paper uses lower cases?\n- Typos: 1. Line 69-70: missing “of” between “degree” and “minimality”; 2. Line 229: have *the* that; 3. Line 4 of Algorithm 4: n should be m." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. The experiments involve a relatively small number of factors and simpler representations. Could the authors elaborate on how Minimality and Sufficiency perform in more complex scenarios with larger factor sets and higher-dimensional representations? Are there any known limitations in applying these metrics?\n\n2. While Minimality and Sufficiency appear to offer distinct advantages, are there any scenarios where they might fail to capture disentanglement effectively, or where they might be less reliable than existing metrics?\n\n3. In Section 4.3, the authors mention that Minimality and Sufficiency are better suited for cases where factors are not independent, as they focus on the most influential factors and not comparisons across factors. Could existing methods like DCI be modified to relax the independence assumption? If so, would Minimality and Sufficiency still provide a more effective disentanglement assessment, reinforcing the novelty of proposed metrics?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The authors propose two predictor-based metrics, Minimality and Sufficiency, for assessing disentanglement without assuming factor independence. Existing approaches often rely on this independence assumption, which rarely holds in real-world data, limiting the applicability of such metrics to narrow and often unrealistic cases. Minimality and Sufficiency are introduced as complementary, yet opposing, principles that balance trade-offs in capturing disentanglement. The authors provide formal definitions and proofs grounded in Information Theory. Through experiments, they demonstrate that existing metrics struggle to measure disentanglement effectively when factors are not independent, whereas the proposed metrics perform better in these more realistic settings." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper addresses the topic of measuring the degree of disentanglement in learned representations. Existing metrics often rely on the assumption that underlying factors are independent, limiting their effectiveness in real-world applications. To address this, the authors propose two complementing metrics: Minimality and Sufficiency. These metrics enable disentanglement assessment without requiring factor independence. Experimental results indicate that Minimality and Sufficiency can identify the presence or absence of disentanglement in scenarios where previous metrics are ineffective." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While the authors argue that Minimality and Sufficiency metrics are more applicable to real-world scenarios due to their independence from factor assumptions, this claim would benefit from validation on complex, real-world datasets. For example, using datasets like CelebA, which include nuanced features and dependencies, could illustrate the metrics' practical advantages.\n\nMinimality and Sufficiency are properties associate with information bottleneck techniques, yet this connection is not addressed. Expanding the related work section to discuss information bottleneck approaches would strengthen the paper’s theoretical foundation by situating the metrics within established work. Including studies that use information bottlenecks for disentanglement would clarify how Minimality and Sufficiency build upon or differ from these techniques." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Very well-written introduction, with a good explanation of the motivation. In general the background and setup are nicely laid out, which is a service to the community.\n- Overall pretty easy to follow, with clear and convincing arguments in the background/theory section.\n- Interesting ideas that are relevant for the community." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes to define and measure disentanglement in a way that is more relevant for real-world scenarios, where (1) the true generative factors of variation are not necessarily independent, and (2) there are nuisance factors which are not relevant for a given task but may act as confounders. The metrics proposed in this paper are somewhat similar to those proposed by Ridgeway & Mozer (2018) and Eastwood & Williams (2018), but generalized/extended in the 2 ways mentioned above. 
Factors are still assumed to be independent here, but only given the observed raw data, while in previous work they were unconditionally independent.\n\nThe authors consider 4 properties:\n1. Factors-invariance: $z_j$ is factors-invariant for the factor $y_i$ if, given $y_i$, there is no mutual information between $z_j$ and all the other factors in $\\mathbf{y}$. This is a direct extension of modularity/disentanglement that accounts for correlations.\n2. Nuisances-invariance: this is the same as factors-invariance, except we replace \"all the other factors\" with \"all nuisance variables\".\n3. Representations-invariance: $z_j$ is representations-invariant for the factor $y_i$ if, given $z_j$, there is no mutual information between $y_i$ and all the other representations in $\\mathbf{z}$. This is a direct extension of compactness/completeness that accounts for correlations.\n4. Explicitness: $\\mathbf{z}$ is explicit for the factor $y_i$ if, given the full representation $\\mathbf{z}$, the data $\\mathbf{x}$ provides no additional information about $y_i$, i.e., $y_i$ is fully described by $\\mathbf{z}$.\n\nThe authors then show that:\n- Minimality is equivalent to (1) and (2) jointly.\n- Sufficiency is equivalent to (3) and (4) jointly.\n\nThey also argue that it is reasonable to focus on minimality and sufficiency, propose methods to estimate these quantities in practice, and showcase their metrics alongside classic disentanglement metrics in a few toy experiments." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**Clarification on minimality and sufficiency.** As far as I understand, minimality and sufficiency are always defined w.r.t. a task, and considering the entire representation. Here, the authors talk about \"task\" to distinguish between relevant factors $\\mathbf{y}$ and nuisances $\\mathbf{n}$. This is consistent with previous work on representation learning that concerns minimality and sufficiency. However, in this paper, the 4 properties (factors-invariance etc.) are linked to minimality and sufficiency in a different context, where the representations $z_j$ are considered separately, and the \"tasks\" are the single factors $y_i$. Is my interpretation correct? Regardless, I think this point should be clarified and made more explicit, to avoid potential misunderstandings. (This could also help give a very precise link between the 4 properties here and the classic disentanglement metrics by Ridgeway & Mozer (2018) and Eastwood & Williams (2018))\n\n**Beginning of Sec. 5:**\n> In this section we use minimality and sufficiency (lowercase) to refer to the properties of section 3 and Minimality and Sufficiency (uppercase) to refer to the metrics of section 4.3.\n\nTwo issues here:\n1. Minimality and sufficiency are not mentioned in Section 3. If the authors mean to distinguish between properties and metrics, then I guess the reference should be to Section 4.1?\n2. Distinguishing only with capitalization sounds like a recipe for misunderstandings. I would suggest using letters/symbols or acronyms, for example.\n\n**Experiments.**\n\nThe theoretical arguments for why these metrics make more sense than existing ones are very clear. The empirical arguments, however, are less so.\n\n1. On the comparison between e.g. minimality and DCI disentanglement in Sec. 5.1: at least in the uncorrelated case, DCI should be good (especially with low nuisance strength). And in fact, I believe it is.
For example, I disagree with \"only takes high values when factors-invariance and nuisances-invariance levels are very low\" being a problem. Absolute values of metrics are not necessarily as interesting as how they rank different representations. I don't think there's anything inherently more \"accurate\" or \"correct\" about the Minimality metric than e.g. Minimality$^2$. What about showing scatter plots and/or computing (rank) correlations between the metrics? Would it give a more complete picture perhaps?\n\n2. My major concern is that, as I wrote above, I think the most interesting aspect is actually how different metrics rank different representations. And the ground truth to make comparisons with, should not be an abstract metric, but rather a concrete evaluation whose relevance/usefulness the community can agree on. So I think it would be important to investigate if these metrics can be useful in practice e.g. for model selection for downstream tasks (including e.g. generalization, fairness, sample efficiency etc.), as done by quite a few disentanglement papers especially around 2019--2021. So a research question could be: when using disentanglement metrics to cheaply select representations for downstream tasks, does our metric yield better results than others according to some evaluation metric?\n\n3. Another concern I have is that sometimes these proposed metric don't seem to do a very good job, especially in the toy image datasets with correlations between factors. E.g. in Fig. 5 DCI disentanglement seems quite bad, but it's arguably better than Minimality on MPI3D (and it's not the only one). The situation is even worse for the sufficiency-like metrics in Fig. 6, where there's no particularly obvious advantage of using Sufficiency (most curves are relatively close to each other). I suspect the issue might be resolved if the authors clarify their interpretation of Figs. 5 and 6. Otherwise, this is clearly a limitation, since it's even in the correlated case, where these methods should shine. So it should be addressed upfront, and ideally there should be a bit more investigation into why this happens, what impact it might have (but see my paragraph above, i.e. to measure actual impact there's more work to be done), whether something can be done to mitigate it.\n\n4. In addition, I think it would be interesting and useful to consider different ways/degrees of mixing as e.g. in [Eastwood et al. (2023)](https://arxiv.org/abs/2210.00364). But that's perhaps more of a side quest and it's more related to explicitness as defined in that paper (see comment on this below under \"minor\"). I wouldn't prioritize this, although maybe a short discussion in passing could be beneficial.\n\n5. Regarding datasets, why use image datasets (Sec. 5.2) if the images as far as I can see are never used? I agree it doesn't make sense to use images, but then why not generate random low-dimensional data (just the factors) since they just need to be mixed artificially? 
Again, these datasets would be useful when doing model selection in a practical (toy) setting as in classic disentanglement papers, to see if these metrics can indeed be more useful than previous ones.\n\n**Minor**:\n- end of page 3: \"connected to those described in 3\" is a reference to the current section (also, it should be \"Section 3\" anyway)\n- line 286: \"since this gap can be low for correlated factors even when zj is minimal.\" I would maybe expand a bit on this to clarify\n- There might be some issues/inconsistencies with notation. As far as I understand, $y_j$ is a factor, there are $n$ factors, and $\\mathbf{y} = \\{y_i\\}_{i=1}^n$ is the set of factors -- all this for a single data point, because e.g. $y_i$ is not bold. But then when \"datasets\" appear, it seems that actually $y_i$ was the vector of $i$-th factors from the entire dataset, and when considering a single data point we have the notation $\\mathbf{y}^{(k)}$, where $k=1,...,K$ and $K$ is the dataset size. I think this notation should be clarified earlier in the manuscript.\n- Explicitness as in Ridgeway & Mozer (2018) is a bit misleading, and I think informativeness is much better (on the other hand, note that compactness in my opinion is more descriptive than completeness). Explicitness is also defined as additional metric by [Eastwood et al. (2023)](https://arxiv.org/abs/2210.00364) where perhaps the term explicitness makes more sense.\n\n**In conclusion**, I think these are very interesting ideas and they are well explained. The writing is also overall good. I think the experimental validation is however rather lacking. Note that I'm not asking for any real-world or larger-scale experiments -- I just think to prove the points the authors want to prove, a wider and better-designed experimental study is necessary." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Appendix B.1 is titled *DERIVATION OF THE ESTIMATOR FOR MINIMALITY* and B.2 is titled *DERIVATION OF THE ESTIMATOR FOR SUFFICIENCY.* However, the authors seem to derive upper bounds. Did I understand this correctly? If so, how tight is the bound? Can the authors please elaborate why they believe an upper bound is sufficient for a metric? \n\n- In Section 4.3 The authors introduce $\\bar{m}$ and $\\bar{s}$, writing \"it is also interesting to have a single term that determines how minimal a representation z is.\" \nWhy did the authors decide to go with the max here? Why not use some other kind of aggregation? Are the $\\bar{m}$, $\\bar{s}$ metrics the ones used in the experimental section?\n\n- In the experimental section 5.2, the authors generate a mapping $A \\in \\mathcal{R}^{n_y \\times n_y}$ by fixing the diagonal values to $1- \\alpha $ and sample the non-diagonal values from a uniform distribution between 0 and $\\alpha$. \nDid I understand this correctly? 
Would $\\bar{m} = \\frac{1}{n_y} \\sum_{j=1}^{n_y}m_{jj}$ and $\\bar{s} = \\frac{1}{n_y} \\sum_{j=1}^{n_y}s_{jj}$ then hold for $\\alpha < 0.5$? If so, why do we see a decrease for $0.5 < \\alpha$ in (a), (b) and (c) of Figure 5 but not in (d), (e) and (f)?\n\n- Why is the disentanglement metric DCI-D at 0 for dSprites and $\\alpha = 0$? \n\n- In Appendix D, Figures 13, 14 and 15 seem to have the same caption? Is this a mistake? Intuitively I must say that the gradient in Sufficiency looks less pronounced than for other metrics (e.g. Disentanglement). Can the authors please elaborate on the plots in Appendix D: how do they show the superiority of their metric?\n\n- No standard deviations are provided in Figures 5 and 6, hinting at a single run, which, given the dynamics of the curves, seems insufficient. Either the authors add standard deviations, or they move one of the Figures of Appendix D into the main paper, as it depicts many more runs." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper is well written in general. The introduction and related work section position the reader adequately. Using the concept of minimality and sufficiency is thought-provoking." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose measuring disentanglement through Minimality and Sufficiency. \nThey propose two metrics based on upper bounds of the aforementioned quantities. \nThey demonstrate their metrics on synthetic experiments. They claim that their metrics are better suited for correlated datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- If I understand correctly, the authors decided to use an upper bound for their metric, but this design choice lacks motivation and is only noted in the Appendix.\n\n- The experimental section is, unfortunately, not convincing to me. Multiple design choices of the authors are not motivated. The authors did not investigate their metrics' correlation with downstream task utility. \nThe authors never looked at actually learned representations, but only used simple linear (and in the first section, trigonometric) mappings. The discussion the authors give is mostly limited to extreme/edge cases of their metric. The metrics' overall behaviour is not discussed." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024defining,\ntitle={Defining and Measuring Disentanglement for non-Independent Factors of Variation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3Mq1tY75nv},\nnote={under review}\n}" }, "abstract": { "value": "Representation learning is an approach that allows one to discover and extract the factors of variation from the data. Intuitively, a representation is said to be disentangled if it separates the different factors of variation in a way that is understandable to humans. Definitions of disentanglement and metrics to measure it usually assume that the factors of variation are independent of each other. However, this is generally false in the real world, which limits the use of these definitions and metrics to very specific and unrealistic scenarios.
In this paper we give a definition of disentanglement based on information theory that is also valid when the factors are not independent. Furthermore, we demonstrate that this definition is equivalent to having a representation composed of minimal and sufficient variables. Finally, we propose a method to measure the degree of disentanglement from the given definition that works when the factors are not independent. We show through different experiments that the method proposed in this paper correctly measures disentanglement with independent and non-independent factors, while other methods fail in the latter scenario." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "disentanglement", "representation learning", "dependent factors", "sufficiency", "minimality" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/c60b6e21f1236d2e79b6a6526bb0777f604fe855.pdf" }, "presentation": null, "primary_area": { "value": "other topics in machine learning (i.e., none of the above)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/db1e505b2c6196ec6296044fb3c99be9ac5b952a.zip" }, "title": { "value": "Defining and Measuring Disentanglement for non-Independent Factors of Variation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
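The disentanglement reviews above question metrics defined as ratios of mutual-information terms, for instance comparing I(z_j; y_i) against I(z_j; x). The toy sketch below shows how such ratios can be computed with crude plug-in estimators on discretised variables. It is an illustration of the quantity under discussion, not the paper's estimator, and the data-generating process, binning, and sizes are assumptions.

```python
# Toy illustration (not the paper's algorithm): for each latent dimension z_j and factor y_i,
# estimate the ratio I(z_j; y_i) / I(z_j; x) on discretised variables. All settings below
# (two factors, the mixing matrix, noise level, 20 bins) are illustrative assumptions.
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)
n = 10_000
y = rng.integers(0, 5, size=(n, 2))                                   # discrete ground-truth factors
x = y @ np.array([[1.0, 0.2], [0.2, 1.0]]) + 0.1 * rng.normal(size=(n, 2))  # observations
z = x.copy()                                                          # a "representation": here just x

def discretise(v, bins=20):
    edges = np.quantile(v, np.linspace(0, 1, bins + 1)[1:-1])
    return np.digitize(v, edges)

# Joint discrete code of the observation, used as a crude stand-in for x in I(z_j; x).
x_code = discretise(x[:, 0]) * 100 + discretise(x[:, 1])

ratios = np.zeros((z.shape[1], y.shape[1]))
for j in range(z.shape[1]):
    zj = discretise(z[:, j])
    i_zx = mutual_info_score(zj, x_code)          # plug-in estimate of I(z_j; x)
    for i in range(y.shape[1]):
        ratios[j, i] = mutual_info_score(zj, y[:, i]) / max(i_zx, 1e-12)

print(np.round(ratios, 3))   # rows: latent dimensions, columns: factors
```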
3NFtzhFbYM
Dolphin: A Programmable Framework for Scalable Neurosymbolic Learning
main
Active
neurosymbolic learning;scalability;vectorization;differentiable reasoning
neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
3;5;6;8
5;3;4;3
1;3;3;3
2;2;2;3
3;2;3;3
5.5
3.75
2.5
2.25
2.75
-0.752618
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to weaknesses 1.\n\nOther questions:\n\n1. What is the limitation of the proposed DOLPHIN?\n2. Are lambda functions fast enough? Do we require doing some acceleration for the proposed operations, such as designing some specific CUDA kernel or triton functions?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The authors propose an end-to-end neurosymbolic framework that makes all progress differentiable.\n2. With a glance at the provided code, the DOLPHIN is a lightweight implementation of the framework integrated with PyTorch, having a potential opportunity to support the neurosymbolic community.\n3. The evaluation on 13 benchmarks across 5 neurosymbolic tasks show the advantage of the proposed DOLPHIN." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes DOLPHIN, a scalable neurosymbolic learning framework that integrates symbolic reasoning efficiently within deep learning models. Unlike existing frameworks that struggle with CPU-GPU data transfer bottlenecks, DOLPHIN enables symbolic programs to operate as a set of inheritanted PyTorch nn.module, allowing efficient GPU-based differentiation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. It seems DOLPHIN only supports neurosymbolic programs with deterministic symbolic processes. For example, if HWF task requires the neural part to predict both numbers and operators (+,-,*,/), the symbolic part cannot be programmed with the Apply function. How DOLPHIN deal with this situation?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see \"weaknesses\" above -- in particular my confusion with the DTKP provenance." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is very well written, and describes the basic execution and batching mechanism clearly, at least for the DAMP provenance. 
The Dolphin framework does seem to be an improvement over SOTA in terms of basic usability for certain classes of neuro-symbolic programs." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors introduce Dolphin, which is a PyTorch-friendly framework for performing neuro-symbolic computations. The neural component of a computation is assumed to be a model, such as an MNIST digit classifier, which outputs a discrete set of symbols (e.g. the digits 0-9), with a probability attached to each of them. The symbolic component is a Python program which runs a computation over the symbols. The result of symbolic execution is a PyTorch tensor, representing final output probabilities for each possible result, which is end-to-end differentiable with the neural components. The authors apply Dolphin to several neuro-symbolic benchmarks, and show that it is faster than competing frameworks.\n\nDolphin essentially works by running the symbolic program for every possible combination of input symbols, and tracking the probability of each combination. The symbolic program is executed on CPU, but Dolphin evaluation will merge different traces of the program which have the same output into batches. The probabilities can then be computed using batch operations that are GPU-friendly, as well as being end-to-end differentiable with PyTorch.\n\nThe authors also provide two different mechanisms, which they call provenances, for tracking probabilities. The DAMP provenance tracks all probabilities, while DTKP tracks only the top-K proofs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The authors spend a lot of time talking about the easy parts of the problem, and fail to adequately discuss the hard parts. As a result, they gloss over two glaring weaknesses that I see with using this approach to solve anything other than cherry-picked trivial problems. \n\nThe first issue is the combinatorics. When evaluating a function f(A,B), where A and B are distributions over symbols, evaluation must evaluate f(a,b) for every possible combination of symbols { a | a \\in A }, and { b | b \\in B }. Depending on the exact problem, this can easily lead to a combinatorial explosion in the number of possible outputs. The authors test their code on the \"sum of MNIST digits\" problem, where the combinatorics are reasonable; even given 20 digits, there are at most 181 possible answers. If they were to instead try the \"product of MNIST digits\", which is a tiny change to the code, then the number of possible outputs would balloon, and the technique would likely fail. \n\nThe second issue is control flow. As a symbolic computation, the \"sum of digits\" has no loops or branches, and thus is trivially easy to batch. The authors mention that they support recursive computations, but those generally require a branch to terminate the recursion, and often have divergent control flow. In the presence of branches, different traces of the program take different paths, and no longer cleanly batch together. \n\nThe usual solution (e.g. in CUDA programs) is that when evaluation encounters a branch, it splits the batch of traces into a then-branch and an else-branch, and then merges the traces together again afterwards. Without merging, the traces will continue to diverge on subsequent branches, until each trace is operating independently at batch size 1, and the benefits of parallelism are lost.
\n\nMerges happen at the join points in a control-flow graph, which requires the underlying library to build a control-flow graph. Alternatively, since there are only two batched operations (conjunction and disjunction), the authors could first construct an (unbatched) DAG of operations, and then merge/batch together independent nodes of the DAG after the fact, in the style of Looks et al. \"Deep learning with dynamic computation graphs,\" or Neubig et al. \"On-the-fly operation batching in dynamic computation graphs.\"\n\nHowever, the authors make no mention of any machinery to analyze control-flow, build control-flow graphs, or otherwise auto-batch in the presence of divergent control flow. In fact, they do not even provide a discussion or examples of how to write recursive computations with their library at all, despite claiming that it is possible. \n\nMy main objection with both of these issues is that the authors simply don't discuss these problems at all, when IMO they are very clearly major limitations that affect the kind of programs that Dolphin is able to run. \n\nA further weakness of the writing itself is that the authors do not do a good job of explaining the DTKP provenance, which seems like it's quite important. I have several criticisms here. First, it is possible that choosing only the top-K proofs after each operation will address the combinatorics issue, which would be a big deal. However, I'm uncertain, because the authors gloss over combinatorics problem altogether without discussion. Second, the authors claim that their mechanism for merging DTKP tags is equivalent to weighted model counting, but this claim is wholly unsubstantiated. I didn't really understand the formula in Table 1 at all, including how infinities get into the tags. At the very least, the authors should provide a detailed discussion of DKTP in the appendix, ideally with a proof of how it relates to WMC, if space within the paper itself is an issue. \n\nFinally, the authors mention that wrt. to the HWF task, \"the DTKP-AM provenance is more effective than DAMP since the tags in DAMP provenance lack the structure needed to capture the semantics of the symbolic program.\" This statement seems important, but really requires further explanation; I don't understand it at all. Providing HWF as a worked example (perhaps in the appendix) would be valuable to anybody who actually wants to use Dolphin. \n\nErrors:\n\nLine 310: \"Its conjuction operation is defined as the addition of probabilities, and its disjunction is defined as the multiplication of probabilities.\" Unless my understanding is way off base, shouldn't this be the other way around? For independent observations, p(A and B) means multiplying p(A) and p(B)? That's what the authors show in Table 1." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- It would be better to provide a breakdown of the training time (e.g., according to Figure 1) to justify the major efficiency improvement of Dolphin.\n- In Figure 5, for small tasks that all competitors could converge within the time limit, why Dolphin has a better accuracy?\n- In Figure 6, there is a trade-off between accuracy and training time for different provenances (DAMP vs. DTKP-AM). How should we select the most suitable one in practice?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- A new framework for neurosymbolic learning, namely Dolphin, is developed.\n- Dolphin shows superior performance compared to existing frameworks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work presents Dolphin, a brand new framework designed to enhance the scalability of neurosymbolic learning. Dolphin allows developers to write differentiable symbolic programs in Python, utilizing PyTorch for end-to-end GPU acceleration. The framework conveys flexible programmability, end-to-end differentiability, scalability, and tunability, which are essential to handling complex programs and large datasets effectively. Experimental results demonstrate that Dolphin significantly outperforms existing frameworks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My major concern is that the paper is not well structured and hard to follow. In the introduction section, the authors criticized existing frameworks that they must use a separate CPU-based backend and suffer from the slow inter-process data transfers. For me, this is implementation-specific, and it does not drive me to the reasons why we should redesign the entire framework. Although the authors further discuss the challenges in lines 52-67, I find it rather irrelevant to the aforementioned limitation of slow inter-process data transfers.\n\nMoreover, as a new framework, there lacks an overview to depict the layered structures. This prevents readers from having a general picture. It is hard to tell why the designs/implementations could realize the core principles. Additionally, I cannot map the core principles to the challenges discussed in the introduction, either." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Line 213: “Tags are tensors that represent relative likelihoods”. Relative with respect to what? 
Relative likelihoods (as I understand it) represent a fraction between two likelihoods.\n- The experimental section doesn’t really explain how the CLUTRR and Mugen tasks are solved by Dolphin. E.g. what are the Dolphin programs used here? I think it could be useful to at least include these in the Appendix.\n- I found the naming of “Distribution” unclear. Unless I misunderstand it, a “Distribution” isn’t a probability distribution? (E.g. the Filter operation can remove probability mass.)\n- How did you perform hyperparameter tuning? Did you tune the baselines in a similar fashion? Given that Table 2 compares total training time, better hyperparameters also affect the reported runtime.\n- Why is there such a pronounced accuracy difference between Dolphin and Scallop in some experiments? From what I understand, the provenances like DAMP are essentially inherited from Scallop, so similar accuracy in Scallop should be possible (although not with the same runtime of course).\n\nMinor comments:\n\n- The paper mentions that an “NVIDIA GeForce RTX 4352” was used for the experiments for all experiments (besides CLUTRR). Is this a typo? I’m not aware of the existence of this specific model, and could not find anything about it on the internet. In contrast, the Appendix mentions that a GeForce RTX 2080 Ti was used.\n- For Table 2, what is the unit of time? I assume this is in seconds, but I couldn’t find this anywhere.\n- For Table 2, what is the provenance used for Dolphin? I assume this is DTKP-AM, but I couldn’t find this anywhere.\n- Line 518: “these methods are point solutions”. What do you mean by \"point solution\"?\n- The brackets on the citation on line 107 are wrong.\n - Figure 6 bottom would be more clear with a log y-scale.\n- Several citations refer to the arXiv preprint instead of the conference publication (e.g. for neurASP, CLUTRR, and NeuralLog)." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "- The problem tackled in the paper (scalable and efficient neurosymbolic learning) is relevant and an important issue for the broader neurosymbolic community. \n- Overall, I feel that the paper is rather well-written and includes several useful figures and examples, resulting in a clear presentation of the ideas. \n- The design of Dolphin seems more accessible to a deep learning audience compared to many existing neurosymbolic systems. Notably, it does not require knowledge of e.g. ASP or Prolog (as is the case for NeurASP or DeepProbLog) and can be integrated easily into a deep learning library such as PyTorch or Tensorflow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Many neurosymbolic frameworks have been introduced in recent years, which typically execute their symbolic component on the CPU. This limits their scalability, due to the inferior CPU performance and data transfer latency between the CPU and GPU. The paper introduces Dolphin, a neurosymbolic framework that is fully implemented by parallel tensor operations, and hence can be run on the GPU using conventional deep learning libraries such as PyTorch. The experiments indicate that Dolphin exhibits considerable speed-ups compared to existing frameworks such as Scallop." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**Novelty**. 
The key contribution of the paper - speeding up a neurosymbolic framework by tensorizing it and running on GPUs - is certainly not a new idea. One example of prior work is the LYRICS framework by Marra et al. (2019), which also uses tensor operations to perform the symbolic operations in parallel on the GPU (see e.g. Figure 1 in their paper). Logic tensor networks (Badreddine et al., 2022) and Tensorlog (Cohen, 2020) are some additional examples. These frameworks often also support different provenances / semirings / t-norms. The parallelization of neurosymbolic methods with expressive probabilistic semantics is more challenging, but also here there is plenty of existing work (see e.g. Darwiche (2020) or Dang et al. (2021)). Unfortunately, the paper does not mention prior work on parallelized neurosymbolic learning, nor how it is different from these existing methods.\n\n**Semantics**. It is not clear to me what exact semantics Dolphin aims to achieve. The first provenance (DAMP) is essentially fuzzy semantics (which has already been shown to be easily parallelizable, e.g. Badreddine et al. (2022)). On the other hand, “Apply” mostly brute-force enumerates models, meaning the necessary independence assumptions for probabilistic semantics can often hold (c.f. the MNIST-experiment). The second provenance is the top-k semiring, which is less trivial to parallelize. However, the proposed solution of adding the different proofs destroys the top-k semantics (lower-bounding the WMC). This also results in the introduction of clamp operations, which could lead to zero gradients.\n\n**Language**. A distinction from existing methods is that Dolphin introduces its own set of programming primitives (apply, filter, etc.). Previous neurosymbolic frameworks have typically built on an existing language, e.g. Datalog for Scallop, ASP for NeurASP, ProbLog for DeepProbLog, etc. However, there is no justification for the choice of programming primitives. How does its expressivity relate to existing systems such as Scallop? Why wasn’t an existing language chosen? In my opinion, a lot of different choices could have been made (e.g. why do you need ApplyIf instead of just Apply + Filter?).\n\n**Experiments**. I was surprised that the IndeCateR baseline achieved such low accuracy, given that the experiment seems to be the same as in the IndeCateR paper, where the reported results are much better. I just tried out the original IndeCateR implementation myself, and I could replicate the MNIST-addition (L) on my machine in 2 minutes. In contrast, the paper reports a timeout after 10 hours. The accuracy also reaches 86.8%, as opposed to less than 10% in the paper (I'm not sure how the paper reports accuracy if it times out). As the code for the baselines is not included in the supplementary material, I hope the authors can clarify these discrepancies. There are additional issues in the experimental section, e.g. there is no mention of hyperparameter tuning, c.f. the questions section. \n\nLastly, the performance of Dolphin is claimed to be state-of-the-art but I’ve seen several systems get better results on the considered benchmarks (the comparison is hard as actual numbers are not reported, and only bars). To give just some examples, Orvieto et al. (2023) report 94% for Pathfinder, and Manhaeve et al. (2021) report near-perfect accuracies for CLUTRR.
I want to stress that I don’t think state-of-the-art results are necessary, but if they are claimed this should be properly supported.\n\nIn summary, the concerns about the novelty of the paper combined with the experimental evaluation issues unfortunately mean I cannot recommend acceptance.\n\n\n**References**\n\nBadreddine, S., Garcez, A. D. A., Serafini, L., & Spranger, M. (2022). Logic tensor networks. *Artificial Intelligence*.\n\nCohen, W., Yang, F., & Mazaitis, K. R. (2020). Tensorlog: A probabilistic database implemented using deep-learning infrastructure. *Journal of Artificial Intelligence Research*, *67*, 285-325.\n\nDang, M., Khosravi, P., Liang, Y., Vergari, A., & Van den Broeck, G. (2021). Juice: A julia \npackage for logic and probabilistic circuits. In *Proceedings of the AAAI Conference on Artificial Intelligence*.\n\nDarwiche, A. (2020). An advance on variable elimination with applications to tensor-based computation. In *ECAI.*\n\nManhaeve, R., Dumančić, S., Kimmig, A., Demeester, T., & De Raedt, L. (2021). Neural probabilistic logic programming in DeepProbLog. *Artificial Intelligence*, *298*, 103504.\n\nMarra, G., Giannini, F., Diligenti, M., & Gori, M. (2019). Lyrics: A general interface layer to integrate logic inference and deep learning. In *Joint European Conference on Machine Learning and Knowledge Discovery in Databases*.\n\nOrvieto, A., Smith, S. L., Gu, A., Fernando, A., Gulcehre, C., Pascanu, R., & De, S. (2023, July). Resurrecting recurrent neural networks for long sequences. In *International Conference on Machine Learning* (pp. 26670-26698). PMLR." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024dolphin,\ntitle={Dolphin: A Programmable Framework for Scalable Neurosymbolic Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3NFtzhFbYM},\nnote={under review}\n}" }, "abstract": { "value": "Neurosymbolic learning has emerged as a promising paradigm to incorporate\nsymbolic reasoning into deep learning models.\nHowever, existing frameworks are limited in scalability with respect to both\nthe training data and the complexity of symbolic programs.\nWe propose Dolphin, a framework to scale neurosymbolic learning at a fundamental level by mapping both forward chaining and backward gradient propagation in symbolic programs \nto vectorized computations.\nFor this purpose, Dolphin introduces a set of abstractions and primitives \nbuilt directly on top of a high-performance deep learning framework like \nPyTorch, effectively enabling symbolic programs to be written as PyTorch modules.\nIt thereby enables neurosymbolic programs to be written in a language like Python that is familiar to developers and compile them to computation graphs that are amenable to end-to-end differentiation on GPUs.\nWe evaluate Dolphin on a suite of 13 benchmarks across 5 neurosymbolic tasks that combine deep learning models for\ntext, image, or video processing with symbolic programs that involve multi-hop \nreasoning, recursion, and even black-box functions like Python `eval()`.\nDolphin only takes 0.33% -- 37.17% of the time (and 2.77% on average) to train these models on the largest input per task compared to baselines Scallop, ISED, and IndeCateR+, which time out on most of these inputs.\nModels written in Dolphin also achieve state-of-the-art accuracies even on the largest benchmarks." 
}, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "neurosymbolic learning", "scalability", "vectorization", "differentiable reasoning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/cf4c62a90d33ee275022e0526bfad31afdcf52f4.pdf" }, "presentation": null, "primary_area": { "value": "neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/4e67292b6302dea2fa54ea29e0429646ae1d865d.zip" }, "title": { "value": "Dolphin: A Programmable Framework for Scalable Neurosymbolic Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3Ofy2jNsNL
ACT-IN-LLM: Adaptively Compression Vision Tokens in LLM for High-Resolution Multimodal Large Language Models
main
Active
Multimodal Large Language Models; High-resolution; Efficiency
applications to computer vision, audio, language, and other modalities
5;5;6;8
3;4;4;5
3;3;3;3
2;2;3;3
3;3;2;4
6
4
3
2.5
3
0.866025
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See \"Weakness\" 1." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Originality:\n- The paper introduces a novel in-layer compression approach, departing from conventional pre-LLM compression methods. This represents a fundamental shift in how vision tokens are handled in MLLMs.\n- The adaptive compression strategy that operates within different LLM layers is an innovative solution to the high-resolution image processing challenge.\n\n2. Technical Quality:\n- The work provides solid theoretical foundations by analyzing token compression through the lens of low-rank approximation in self-attention mechanisms.\n- The authors conduct detailed empirical studies to demonstrate why early-layer compression is suboptimal, supporting their approach with concrete evidence.\n- The technical approach is well-motivated through empirical observations about token importance varying across layers.\n\n3. Solid Experiments:\n- The method achieves substantial practical improvements, reducing training/inference time by 20% and vision tokens by 60% while maintaining competitive performance.\n- The 6.2% performance improvement over existing compression techniques represents a significant advancement in high-resolution image processing for MLLMs.\n- The experimental validation is comprehensive, spanning multiple model sizes (0.5B to 7B parameters) and various benchmarks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the challenge of processing high-resolution images in multimodal large language models (MLLMs) by introducing ACT-IN-LLM, a novel adaptive compression strategy for vision tokens. Unlike existing pre-LLM compression methods that reduce tokens before LLM processing, ACT-IN-LLM performs compression within different LLM layers through an adaptive compression module (ACM). The method selectively compresses key and value tokens in the self-attention mechanism while retaining all tokens across layers, guided by each layer's final token that encodes the complete multimodal context. The authors provide theoretical analysis demonstrating that their key-value compression approach achieves better low-rank approximation compared to existing compression techniques. Experimental results across various LLM sizes (0.5B to 7B parameters) show that ACT-IN-LLM achieves a 6.2% improvement over existing compression methods while maintaining competitive performance with non-compression models, reducing training/inference time by approximately 20% and vision tokens by 60%." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
Methodological Clarity and Analysis:\n- The paper lacks clear explanation of how attention weights across multiple heads are handled in their analysis. This is crucial for understanding their token importance assessment methodology, as different aggregation methods (averaging across heads, selecting specific heads, or analyzing heads separately) could lead to different conclusions about token importance.\n- The analysis of attention weight distributions between different types of tokens (vision-to-vision, vision-to-text, text-to-vision) is missing, which could provide deeper insights into the token compression mechanism.\n- The authors could strengthen their analysis by including entropy measurements of attention weights for different tokens, which would provide quantitative support for their token selection strategy.\n\n2. Technical Presentation:\n- The paper introduces concepts like \"high-resolution\" and \"low-resolution\" tokens without first establishing the context of LLaVA's AnyRes visual encoding scheme. This may create confusion for readers not familiar with the underlying visual encoding mechanisms in multimodal LLMs." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. As you are progressively shrinking the ratio $r_i$, $r_j$ and $r_p$, how do you determine where and how much should you shrink?\n\n2. Why do you use the attention map from the previous layer to guide vision token compression instead of the current layer's attention map?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is well-written and easy to follow, offering a clear mathematical proof to demonstrate the theoretical effectiveness of the proposed method.\n\n2. The proposed approach is both intuitively and mathematically sound.\n\n3. The experimental results are thorough, complemented by a comprehensive ablation study." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a novel Adaptive Compression Module (ACM) designed to dynamically reduce the number of high-resolution image tokens for key/value during the forward pass. The ACM leverages attention maps to identify and retain the most relevant high-resolution tokens, preserving only the top k tokens for key/value. Experimental results demonstrate the efficiency and effectiveness of this approach." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The evaluation benchmark is somewhat limited. Adding more benchmarks, such as visual grounding benchmarks and other non-text-related high-resolution benchmarks like V* Bench, would facilitate more comprehensive evaluations." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Based on the results presented, all query tokens are retained, while key and value tokens are compressed in the self-attention mechanism. If the value tokens are compressed, the number of tokens outputted from the attention block should match the number of compressed value tokens. Then why are all image tokens still preserved when entering the final LM head? Please explain this in detail.\n2. Please provide a detailed explanation of how your strategy reduces computational load across various components." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper provides a valuable perspective on why early token deletion should be avoided.\n2. The writing is generally smooth, and the presentation of figures and tables is visually appealing, contributing positively to the overall presentation of the paper.\n3. The motivation section reads coherently and introduces the problem that needs to be addressed in a natural manner." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The proposed ACT-IN-LLM method improves Multimodal Large Language Models (MLLMs) by adaptively compressing vision tokens within different layers, unlike traditional methods that reduce tokens before reaching the LLM. This approach preserves all tokens throughout layers, selectively compressing only key and value tokens in self-attention to maintain critical information while reducing computational load." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper dedicates significant space to comparisons with traditional methods that compress tokens before using LLMs. However, it overlooks an essential baseline, FastV [1], which also compresses image tokens within the LLM itself and allows for direct comparison of training results. This omission makes the paper less convincing.\n2. The paper claims \"reducing training/inference time,\" but does not provide any data demonstrating training time reduction.\n3. The proposed strategy appears usable without training; therefore, it would be beneficial to include results showing inference acceleration without additional training.\n4. 
The existence of Table 3 is quite awkward: first, there are numerous gaps in the table, and second, the training data is entirely different, making these models incomparable.\n[1]An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Language Models, https://arxiv.org/abs/2403.06764" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Thanks for the authors' valuable exploration in this area. I have several concerns, and if these can be addressed, I would like to raise my rating.\n\n1. The reported compression rate seems relatively low (the proposed method achieves 83% of the full model's performance and 94% of its memory usage according to Table 2). Would it be possible for the authors to provide results with a higher compression ratio (around 60%, for example) to more effectively demonstrate the advantages of the proposed method?\n2. It appears that FastV[1] is not included in the comparisons. Could the authors consider providing comparisons with FastV, particularly at higher compression ratios (such as around 60%)?\n3. The current reliance on text-guided token selection may have limitations. Have the authors considered incorporating the complexity of visual information into the compression strategy?\n\n[1] https://arxiv.org/pdf/2403.06764" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "By constructing a unified formulation of compression methods and analyzing low-rank approximations, the paper provides a robust theoretical framework supporting its method. Experimental results convincingly demonstrate that ACT-IN-LLM outperforms existing pre-LLM compression and interaction-based methods, achieving competitive accuracy while significantly reducing token count and computational costs. Notably, the proposed method shows good scalability, with consistent gains observed across larger model sizes and datasets. These strengths suggest that ACT-IN-LLM offers a practical, efficient solution for high-resolution MLLM applications, and the work is both well-structured and empirically solid." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper examines the limitations of token compression strategies in MLLMs when processing high-resolution visual inputs. It presents ACT-IN-LLM, an approach designed to address these limitations by adaptively compressing visual tokens across LLM layers, contrasting with existing methods that apply compression before token input to the LLM. The authors claim this layer-wise, interaction-guided compression effectively preserves essential visual information, improving model accuracy while reducing computational load. 
Experiments indicate significant performance gains over prior compression strategies and competitive results with non-compression methods, highlighting ACT-IN-LLM’s potential to enhance high-resolution MLLM efficiency and scalability." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Relying solely on text-guided token selection may limit the model's adaptability, as it could overlook the inherent complexity of visual information itself. Without considering factors like scene detail or object density, the compression strategy might miss important visual nuances, potentially affecting performance across diverse tasks." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024actinllm,\ntitle={{ACT}-{IN}-{LLM}: Adaptively Compression Vision Tokens in {LLM} for High-Resolution Multimodal Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3Ofy2jNsNL},\nnote={under review}\n}" }, "abstract": { "value": "High-resolution inputs empower Multimodal Large Language Models (MLLMs) to capture intricate visual details, thereby enhancing comprehension. However, the self-attention mechanism’s quadratic complexity poses significant computational and memory challenges as image resolution increases, particularly with long-vision tokens. Existing approaches generally alleviate these issues by reducing vision tokens before feeding them into LLMs. Although efficient, this Pre-LLM compression strategy fails to match the performance of models utilizing all tokens, particularly on high-resolution benchmarks. Our experiments reveal that the performance gap arises from this strategy’s limitation in selecting important visual tokens in early LLM layers, leading to the irretrievable loss of critical information. To overcome these challenges, we propose a new strategy that Adaptively Compresses vision Tokens within different LLM layers, named ACT-IN-LLM. Our innovative approach retains all tokens throughout the layers to ensure no vital information is lost while compressing key and value tokens in the self-attention mechanism, to reduce computational costs. The layer-wise compression of ACT-IN-LLM is guided by the interaction information between vision and text tokens, leading to more accurate selections. Our theoretical analysis and extensive experiments demonstrate the effectiveness of ACT-IN-LLM, showing a 6.3% improvement over existing token compression techniques. It also achieves the competitive performance with non-compression methods, while reducing training/inference time by ∼ 20% and vision tokens by ∼ 60%." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Multimodal Large Language Models; High-resolution; Efficiency" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/078f5730c544160d26a27af1352e3a5063d39057.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "ACT-IN-LLM: Adaptively Compression Vision Tokens in LLM for High-Resolution Multimodal Large Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3Oli4u6q3p
RelitLRM: Generative Relightable Radiance for Large Reconstruction Models
main
Active
Relightable reconstruction;Inverse Rendering;Generative Relighting
applications to computer vision, audio, language, and other modalities
3;6;6
5;3;3
2;3;3
2;2;3
3;3;3
5
3.666667
2.666667
2.333333
3
-1
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* In the tables with the numbers for metrics please highlight the best numbers (bold)\n* What hardware was used for training the model? Training time, memory requirements. Add to A.4\n* In theory the method should work for objects that traditionally have challenging reflectance properties such as hair or fur. I am not sure if hair and fur were part of the training dataset, but it still might be interesting to see if it works." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* Pseudocode in A.1clarifies all misunderstandings from the text descriptions. I can not stress enough how much I like this way of communicating the pipeline.\n* The approach uses a diffusion process instead of a multi-view optimization, which is exceptionally fast in direct comparison.\n* Trained on a rich dataset with a huge variety of lighting conditions.\n* Light representation (input) is a simple environment map. Changing lighting is therefore very simple (no complicated neural light representation)" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a method to generate a relighable 3D representation using 3DGS for the geometry and a diffusion based model to get the illumination dependent appearance parameters for those Gaussians.\nThe geometry is predicted in form of per pixel Gaussians from the sparse input views (4 - 16) in a single Transformer forward step. The tokens of the geometric representation is concatenated with HDR features extracted from the illumination given as environment map and the noise target views. After denoising the tokens for everything except the input gaussians are discarded. The remaining tokens are decoded into the appearance (SH) of the Gaussians. The Gaussians are then rasterized into any novel view. The diffusion model is trained such that the lighting of the rendered Gaussians should match the lighting of the input environment map. During inference this environment map and the target view camera pose can be arbitrarily be chosen, thus a scene can be reconstructed from sparse views and then relit." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* Knowing only even a fraction of the research and work that has gone into disentangling the ambiguity between lighting and shading it rubs me the wrong way to read something that suggests it solved it without actually addressing the core problem. The method does not really decompose a scene into geometry, lighting and shading and is not usable if the use case would require extracting or editing reflectance properties. 
The way I see it, this paper does relighting by decomposing a scene into geometry and appearance; however, this is very different from what methods that explicitly extract reflectance and illumination do. The problem statement is profoundly different if you have to produce an explicit reflectance representation that represents some underlying physical property of the surface, compared to just estimating the product of light and reflectance. I don't think much has to be changed to show respect for this difference: In 2.1 it should be mentioned that the method models the appearance under arbitrary illumination without an explicit reflectance representation. In the introduction, the claim that the ambiguity between shading and lighting is overcome should be phrased more carefully or be clarified. As far as I understand, this paper estimates appearance as the integrated product between the unknown shading and a given lighting in the form of a view-dependent SH. This is really great work, but should not be confused with reflectance decomposition." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See \"weakness\"." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- originality-wise: I appreciate the proposed end-to-end transformer-based architecture for relightable 3D object reconstruction. \n- quality-wise: when taking into account the efficiency of the proposed approach, I think quantitatively the performance is good.\n- clarity-wise: the paper is well-written and easy to follow.\n- significance-wise: the task of relightable 3D object reconstruction is vital for many downstream applications, e.g., AR/VR and robotics." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper tackles the task of reconstructing relightable 3D objects from sparse images. For this, the authors extend the idea of GS-LRM to incorporate the generation of a relightable appearance. Specifically, they feed the geometry feature from GS-LRM through a diffusion process based on the transformer architecture. The transformer attends to the target illumination and outputs appearance tokens. The combination of the appearance tokens from the newly added transformer and the geometry tokens from the original GS-LRM transformer concludes the generation of Gaussian Splats. Experiments on the Stanford-ORB and Objects-with-Lighting real-world benchmarks as well as the TensoIR synthetic benchmark demonstrate the effectiveness of the proposed approach." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "## Concerns about the architecture\n\nI think the authors missed an important question to answer: why do we choose the current architecture?
Essentially, the mechanism of discarding many tokens (L231) is wasting the model's capability. \n\nThe root question is why do we need to use self-attention instead of cross-attention? Why do the environment maps and the denoised images need to be treated the same as the appearance tokens, especially since only the appearance tokens will be used for the Gaussian Splats rendering?\n\n## Not enough understanding of the current model\n\nEven though with the current architecture, I do not think the authors provide enough analysis. Specifically, have the authors visualized the attention maps of the transformer? What does the transformer learn? Does it attend to some specific areas in the environment map that cause the specular effects in the rendering? How does the transformer attend to those denoised images?\n\n## Concerns about the qualitative results\n\nIn Fig. 3.(a), the produced Pepsi can's color is quite different from the ground truth. A similar thing happens to the gnome's colors. Further, the characters on the Pepsi can are quite blurry compared to NVDiffRec-MC / ground-truth. Additionally, in Fig. 3.(c), the RelitLRM produces quite different shows from the ground truth. However, the shadows are correctly predicted by both InvRender and TensoIR. \n\nWhy is this the case? Have the authors carefully studied the causes? Whether increasing the number of views will help? Will 16 views mitigate these issues as the authors state that \"performance saturates around 16 views\" (L525)? If even 16 views cannot resolve the issue, what are the intrinsic shortcomings of the proposed model?\n\nI hope the authors can provide a systematic analysis for a good understanding of the model.\n\n## Missed important baselines\n\nI think the authors missed several quite related as well as important baselines, e.g., [a, b]. They both use the idea of diffusion-based relighting model to tackle the task of relightable reconstruction.\n\nEspecially IllumiNeRF [b], which directly tackles the task of relightable object reconstruction and competes on the benchmark of Stanford-ORB and TensoIR. Frankly speaking, [b] outperforms the proposed approach on both benchmarks quantitatively:\n- PSNR-H / PSNR-L / SSIM / LPIPS: 24.67 / 31.52 / 0.969 / 0.032 (RelitLRM) vs 25.42 / 32.62 / 0.976 / 0.027 ([b] on Stanford-ORB)\n- PSNR / SSIM / LPIPS: 29.05 / 0.936 / 0.082 (RelitLRM)) vs 29.709 / 0.947 / 0.072 ([b] on TensoIR)\n\n[a] A Diffusion Approach to Radiance Field Relighting using Multi-Illumination Synthesis. EGSR 2024.\n\n[b] IllumiNeRF: 3D Relighting without Inverse Rendering. ArXiv 2024.\n\n## Concerns about the novelty claim\n\nOn L088, the authors claim the first contribution as \"a regression-based geometry reconstructor followed by a diffusion-based appearance synthesizer\". Though I appreciate the end-to-end transformer architecture, I may not be convinced that the idea is entirely novel since the above-mentioned IllumiNeRF has already proposed an almost exact idea. I would recommend modifying it to correctly position the work.\n\n## Missed training details\n\nWhat is the hardware setup required to train the model? How long does it take to complete the training, hours or days?\n\n## Incorrect description about the benchmark\n\nIn L356, the authors state that the Stanford-ORB benchmark has \"60 training views and 10 test views per lighting setup\". This is not true. 
Please correct it.\n\n## Typos\n\nL290: \"is involves\" -> \"involves\"" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- In the paper, the authors provide the rendering performance. However, I cannot find the training time. Please provide more specifics on the training setup, such as the number of GPUs used and total training hours. \n- A more thorough analysis and discussion of why performance plateaus between 8 and 16 views would enhance the paper's quality.\n- Please provide quantitative evaluations of the extracted geometries.\n- In the object insertion image (Figure 1(c)), how is illumination in the scene accounted for? Did you sample at the position of the object to capture the surrounding environment and incorporate the environment map into your model? Additionally, how do you account for the indirect light from another object produced by your RelitLRM in the scene?\n- The method still takes 2 to 3 seconds to render. In contrast, the geometry and textures obtained from other methods can be rendered using many real-time rendering techniques. Moreover, in current industrial applications, it is challenging to abandon triangle meshes and textures. Therefore, this method cannot be considered a viable solution for 3D digital assets. However, if this approach could be extended to scene relighting, its applicability could be significantly broadened.\n- Is the number of Gaussians predicted by the model sufficient for capturing different topologies and structures across diverse objects?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The method models geometry and rendering independently, akin to NeRF’s modeling architecture. It first constructs geometry from multi-view images, then generates the rendering by combining geometry features with environmental maps.\n- The method is practical as it only requires 4-8 views as input. It is able to effectively reconstruct relightable objects without per-scene optimization.\n- On both synthetic and real-world datasets, the method offers competitive relighting results, requiring far fewer input views than other methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a method trained end-to-end on synthetic multi-view renderings of objects under varied, known illuminations. The approach is able to generate high-quality Gaussian splatting representations of 3D objects under novel illuminations from sparse input images (4-8 views) captured in unknown, static lighting conditions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- In the paper, the authors claim that performance plateaus at around 16 views.
However, as shown in Table 5, there is only a marginal improvement in image quality on the TensoIR-Synthetic dataset when increasing from 8 views to 16 views. \n- The novelty of the approach needs clearer articulation. The authors state that their method differs from Neural Gaffer (Jin et al., 2024) by not being limited to single-view inputs. However, this advantage seems to stem from the use of GS-LRM (Zhang et al., 2024a). It is important to clarify how their application of diffusion for relighting distinguishes their method from existing techniques.\n- This method separates geometry from rendering, but the paper does not show results of the decomposed geometry. It is unclear how good or bad the quality of the reconstructed geometries is." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose \\emph{\\method}, a Large Reconstruction Model (LRM) for generating high-quality Gaussian splatting representations of 3D objects under novel illuminations from sparse (4-8) posed images captured under unknown static lighting." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024relitlrm,\ntitle={Relit{LRM}: Generative Relightable Radiance for Large Reconstruction Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3Oli4u6q3p},\nnote={under review}\n}" }, "abstract": { "value": "We propose RelitLRM, a Large Reconstruction Model (LRM) for generating high-quality Gaussian splatting representations of 3D objects under novel illuminations from sparse (4-8) posed images captured under unknown static lighting. Unlike prior inverse rendering methods requiring dense captures and slow optimization, often causing artifacts like incorrect highlights or shadow baking, RelitLRM adopts a feed-forward transformer-based model with a novel combination of a geometry reconstructor and a relightable appearance generator based on diffusion. The model is trained end-to-end on synthetic multi-view renderings of objects under varying known illuminations. This architecture design enables to effectively decompose geometry and appearance, resolve the ambiguity between material and lighting, and capture the multi-modal distribution of shadows and specularity in the relit appearance. We show our sparse-view feed-forward RelitLRM offers competitive relighting results to state-of-the-art dense-view optimization-based baselines while being significantly faster. Our project page is available at: https://relitlrm.github.io/." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Relightable reconstruction", "Inverse Rendering", "Generative Relighting" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/8a5a5db26b59913c565de14b69879b782caab1c1.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "RelitLRM: Generative Relightable Radiance for Large Reconstruction Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3OyaXFQuDl
Smaller, Weaker, Yet Better: Training LLM Reasoners via Compute-Optimal Sampling
main
Active
large and small language models;reasoning;math;compute-optimal;sampling;supervised finetuning
foundation or frontier models, including LLMs
5;6;6;6
3;4;3;3
3;3;3;3
2;2;3;3
3;3;3;2
5.75
3.25
3
2.5
2.75
0.333333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "I will try and cluster my questions in sensible groups.\n\n1. _Theoretical Understanding_:\n - Can you provide theoretical insights into when WC sampling should outperform SE sampling?\n - How does the optimal sampling ratio change with model size and task complexity?\n - What are the key factors that determine the success of weak-to-strong improvement?\n\n2. _Methodology_:\n - How would the results change with more sophisticated filtering strategies?\n - Could you provide more details about the specific prompting strategies used?\n - How sensitive are the results to the choice of temperature and sampling parameters?\n\n3. _Generalisation_:\n - What characteristics of a task make it more/less suitable for WC sampling?\n - How would the results scale to even larger model sizes?\n - What is the relationship between FPR and final model performance?\n\n4. _Practical Implementation_:\n - How would you recommend implementing this in scenarios without ground truth?\n - What modifications would be needed for different domains or tasks?\n - Could you provide more detailed guidance on optimal sampling strategies for different scenarios?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. _Originality_:\n - Introduces a novel compute-matched sampling framework with clear mathematical foundations\n - Proposes a new \"weak-to-strong improvement\" training paradigm that challenges conventional wisdom\n - Provides a fresh perspective on the compute-quality trade-off in synthetic data generation\n\n2. _Experimental Rigour_:\n - Comprehensive evaluation across multiple dimensions:\n * Multiple model pairs (both open and closed models)\n * Various compute budgets and training paradigms\n * Different dataset sizes and difficulty levels\n - Thorough ablation studies that isolate the impact of coverage and diversity\n - Both human and automated evaluation of false positive rates\n - Clear validation of results through transfer learning (Functional MATH)\n\n3. _Practical Impact_:\n - Demonstrates significant cost savings potential (0.15x cost for comparable or better performance)\n - Shows consistent improvements across model sizes (7B to 27B)\n - Provides actionable insights for practitioners\n - Results particularly relevant given the trend of improving smaller models\n\n4. 
_Technical Depth_:\n - Rigorous mathematical formulation of compute-matching\n - Analysis of trade-offs between coverage, diversity, and error rates\n - Ablation studies support main claims\n - Clear empirical validation of theoretical framework" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper challenges the common practice of using strong but expensive (SE) language models to generate synthetic training data, proposing instead that using weaker but cheaper (WC) models may be more compute-optimal. The authors introduce a \"compute-matched sampling\" framework that enables fair comparison between WC and SE models by accounting for their relative compute costs. At a fixed compute budget, this framework shows that one can generate P_SE/P_WC more samples from a WC model than an SE model. The authors evaluate this approach across multiple model pairs (Gemma2 9B/27B and Gemini Flash/Pro), tasks (primarily mathematical reasoning), and training paradigms (knowledge distillation, self-improvement, and a novel \"weak-to-strong improvement\"). They assess the generated data along three key dimensions: coverage (problems solved), diversity (unique solutions per problem), and false positive rate (correct answers with incorrect reasoning). The results consistently show that training with WC-generated data outperforms SE-generated data when properly compute-matched." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. _Theoretical Foundation_:\n - Lacks formal analysis of when WC sampling should outperform SE sampling\n - No theoretical bounds on the optimal sampling ratio\n - Missing analysis of the relationship between model size and optimal sampling strategy\n - Limited exploration of failure modes and their characteristics\n\n2. _Methodology Limitations_:\n - Heavy reliance on ground truth for filtering solutions\n - Limited exploration of alternative filtering strategies\n - FPR evaluation methodology could be more robust (50 human samples probably insufficient)\n - Some key implementation details relegated to appendices\n\n3. _Generalisation Concerns_:\n - Primary focus on mathematical reasoning tasks\n - Limited exploration of other domains (coding results show context-dependency)\n - Unclear scalability to larger model sizes\n - Performance on more complex reasoning tasks not fully explored\n\n4. _Practical Considerations_:\n - Deployment challenges in scenarios without ground truth not fully addressed\n - Resource optimisation strategies could be explored more\n - Limited discussion of integration with existing training pipelines\n - Cost-benefit analysis could be more comprehensive across different scenarios" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. 
Although both low and high budgets are studied, could you please provide the results of an extremely high budget where the cost is not an important factor? This should be indicative of diverse data scales.\n2. Despite the train-test splits of MBPP, this paper only trains models on MBPP and tests them on HumanEval. The testing results on MBPP are expected to be provided for a more comprehensive understanding.\n3. Writing:\n- 1. All the ref links are invalid.\n 2. l101, l104: \"i.e.\" should be \"i.e.\".\n 3. l103: grammar error for \"we supervise finetune\".\n 4. l109: use \\citet{} for \"(Zelikman et al., 2024;Singh et al., 2023).\"." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The research question is significant, focusing on performance comparison of data sampled from WC and SE models, respectively.\n2. The findings are impressive, challenging the traditional belief that data from a strong model is better for finetuning models.\n3. The evaluation settings are diverse, demonstrating the effectiveness and robustness of this method, despite only the Gemma series models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper revisits the trade-offs between generating synthetic data using a stronger but more expensive (SE) model versus a weaker but cheaper (WC) model, and finds that at a fixed sampling compute budget, finetuning LMs with data from a WC model can consistently outperform data from a SE model in multiple settings." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. This paper centers exclusively on the Gemma series, and it is essential to extend the analysis to the Llama series to demonstrate the robustness of the conclusions.\n2. While the paper aims to highlight the lower computational cost of the WC model for data synthesis (particularly important for large-scale data generation), all the experiments are conducted on relatively small datasets. This discrepancy undermines the overall contribution of the paper.\n3. Compared to the SE model, the WC model can be regarded as a more diverse yet lower-quality variant. Therefore, it is crucial to compare it with techniques designed to enhance output diversity. Specifically, if adjusting the sampling temperature of the SE model consistently results in performance degradation relative to the WC model, this suggests that the WC model provides a superior quality-diversity trade-off compared to merely increasing the sampling temperature." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "It seems that the difference in data quality between the WC and SE models becomes larger at lower budgets. 
Is it possible that the WC and SE models generate data of similar quality when the budget is very high?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is well-structured and clearly written, making the methodology and results easy to follow.\n\nThe experiments are well-executed and provide convincing evidence of the benefits of the proposed approach.\n\nIt addresses a critical issue in synthetic data generation, offering a valuable contribution to this area of research." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents the novel observation that generating synthetic data using a weaker but cheaper (WC) model is more effective than using a stronger but more expensive (SE) model. The authors demonstrate that, under the same budget, data generated by WC models tend to have higher coverage and diversity, though with a corresponding increase in false positive rates. Additionally, they show that models fine-tuned on WC-generated data consistently outperform those trained on SE-generated data." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The conclusion may not hold when using models from different companies. Based on my experience, under the same budget, data generated by a larger model like Qwen2.5 7B could outperform that of a smaller one like Gemma2 2B.\n\nThe paper could benefit from experimenting with more complex reasoning tasks, such as tree search algorithms, and using a reward model to evaluate the quality of the generated data." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* Your distillation setup is limited to finetuning. One setup where it would be more realistic to not have enough budget is pretraining. Do you have any results on this? I of course do not expect to pretrain a network until convergence during the rebuttal, but it would already be helpful if you could show the first couple of iterations just to make sure the worse data (higher FPR) does not seem to converge to a much worse model.\n* I'd be interested in sample-matched figures. The figures where I'd be most interested in a sample-matched comparison are Figures 4c and 5c. This would allow finding out if a small model can successfully improve a larger model, which would challenge beliefs in the field.\n* Just to be sure: In the self-improvement setups, you keep training a model iteratively on its own generations from the current parameters? Or do you mean that you finetune a \"fresh\" 7B model using an already converged 7B model?"
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The paper investigates not only knowledge distillation, but also a self-improvement setup\n* In Figures 4 and 5, it is interesting that training on model-generated solution paths (one per question; at least in the \"27B low compute\" setup) gives a better performance than training on human-provided solution paths (also one per question)\n* Figure 7 carries an interesting finding: Despite training on the data generated by the small model which has more errors, the ultimate trained model does not have more errors in its reasoning. This implies that the additional data mitigates its lower quality, which might add evidence to the discourse beyond the setup studied in this paper.\n* The writing and flow of experiments is mostly clear" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper investigates whether it is better to (self-)distill from a Gemma-27B LLM or to distill three times more finetuning data from a three times smaller Gemma-9B model. It finds that the three times more data of the smaller model, despite including more errors, leads to a higher performance of the finetuned student model." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* In 21 of the 22 Figures, the paper hinges on \"matching the compute\", i.e., being allowed to generate 3 times more data when using the 3 times smaller LM. This confounds two factors of variation, making it hard to interpret the findings. This is the main weakness of the paper. One idea to improve on this weakness would be to test out distilling 1, 3, and 9 samples from the large LLM and 3, 9, and 27 samples from the small LLM (instead of the current 1+10 vs 3+30), so that there are both overlapping settings with a matched number of samples and with a matched compute.\n* In the only figure where the small LLM is compared to the large LLM without this advantage (Figure 20 in the appendix), the large LLM produces better training data. It can be expected that if we use the large LLM to generate enough data until the student model converges, it will make a better distilled model. Thus, the only real application of the proposed method is when we do not have enough budget to produce enough data to converge. For the finetuning setup of the paper, that would amount to not being able to generate data for 8k-12.5k questions. This is a setup with limited applicability in practice. It would increase the contribution (score) of the paper to investigate problems where budget limits are hit more frequently in practice, like pretraining, see also my question below.\n* Relative and absolute increases are reported inconsistently. E.g., in Figure 3b the fact that the proposed small model finds 11 instead of 5 solution paths per question (when it is allowed to generate 3 times more paths in total) is reported as a 125% increase (line 268), whereas the fact that 24% of its solutions paths are wrong compared to 17% of the large model is reported as a 7% increase (line 310). This inconsistency becomes problematic when reporting the increase on percentage numbers (e.g., line 258), where it is unclear whether this is a relative or absolute increase. 
Keeping the reporting consistent would increase both the presentation and the scientific soundness scores.\n* The paper only evaluates Gemma (/Gemini) models. It would help judge the generalization of the claims (and increase the contribution score) to test it out on at least one other LLM, like a Llama model.\n* The datasets are very limited to two math datasets, limiting the contribution. As above, more datasets would help judge the range of applicability, especially whether it also works on non-math and non-reasoning datasets.\n* The paper does not compare to baselines, despite citing multiple closely related approaches\n* The method still requires annotated data, because the LLM-generated data needs to be filtered out if it does not match the GT. It would increase the applicability of the score (and thus the contribution score) if there were an ablation without filtering, i.e., answering whether the unfiltered erroneous data from the smaller model can still train a better model.\n\nSmall notes that did not influence my score and don't need to be rebutted, I just note them to make the camera-ready better:\n* The first paragraph of Section 3 could be shortened; its message (in Equation 1) is just \"if a model has x times more parameters, it takes x times longer to generate\".\n* typo in line 54, \"filters\"\n* typo in line 103 \"we supervise finetune\"\n* typo in line 151, \"consists\"\n* typo in line 157, \"for training student LM\"\n* typo in line 241, \"that where\"\n* The references exclusively list arxiv versions of the papers, not their actual published versions\n* The reference .bib file should best use double brackets for \"{{OpenAI}}\", \"{{The Llama Team}}\", to prevent the ill formatting in line 483 (\"Team, 2024; Anthropic, 2024; AI, 2024\")" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024smaller,\ntitle={Smaller, Weaker, Yet Better: Training {LLM} Reasoners via Compute-Optimal Sampling},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3OyaXFQuDl},\nnote={under review}\n}" }, "abstract": { "value": "Training on high-quality synthetic data from strong language models (LMs) is a common strategy to improve the reasoning performance of LMs. In this work, we revisit whether this strategy is compute-optimal under a fixed inference budget (e.g., FLOPs). To do so, we investigate the trade-offs between generating synthetic data using a stronger but more expensive (SE) model versus a weaker but cheaper (WC) model. We evaluate the generated data across three key metrics: coverage, diversity, and false positive rate, and show that the data from WC models may have higher coverage and diversity, but also exhibit higher false positive rates. We then finetune LMs on data from SE and WC models in different settings: knowledge distillation, self-improvement, and a novel weak-to-strong improvement setup where a weaker LM teaches reasoning to a stronger LM. Our findings reveal that models finetuned on WC-generated data consistently outperform those trained on SE-generated data across multiple benchmarks and multiple choices of WC and SE models. These results challenge the prevailing practice of relying on SE models for synthetic data generation, suggesting that WC may be the compute-optimal approach for training advanced LM reasoners."
}, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "large and small language models", "reasoning", "math", "compute-optimal", "sampling", "supervised finetuning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/af167d3c7bc7f6070d6d302d1551976ede8917bd.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Smaller, Weaker, Yet Better: Training LLM Reasoners via Compute-Optimal Sampling" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3PDklqqqfN
Multi-Field Adaptive Retrieval
main
Active
information retrieval;hybrid retrievers;structured data
applications to computer vision, audio, language, and other modalities
5;5;6;6;8;8
5;3;4;4;4;4
2;3;3;3;4;3
2;2;3;3;3;3
3;3;3;4;4;3
6.333333
4
3
2.666667
3.333333
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Are the scoring dimensions really orthogonal, enabling summation as the summarization metric?\n\nAren´t there additional baselines and data sets that may be used in the experiments?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper tackles a relevant problem, although the proposal has some limitations discussed below. The paper is well written and easy to understand. Originality is rather limited, since the semi-structured retrieval has been researched for a long time. The significance of the results is also limited because the approach is rather simple, and the experimental evaluation focuses on a recently published benchmark. The paper quality is good to its purposes, despite some adhoc design decisions." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper is concerned with the retrieval of documents which are multi-field, i.e, composed of multiple attributes such as title, authors and content. It proposes a method called MFAR which combines the use of different scoring methods, both lexical (word-based) and dense (embedding-based), for each field, allowing the model to adaptively predict the importance of each field for a given query. \n\nThe authors conduct experiments on three datasets (product reviews, scientific papers and biomedical studies) that demonstrate that MFAR outperforms existing retrieval methods, achieving state-of-the-art results in structured data information retrieval. The study explores the benefits of the multi-field approach and the hybrid use of scoring methods to improve retrieval accuracy, showing that, instead of choosing between dense or lexical-based scorers alone, one can combine them in a hybrid fashion. An adaptive weighting technique is provided to combine those scores given a query." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "It is not clear to me whether the fields may be considered indepent, so that the summation of the field-related scores suffices for determining the overall score. That is, it seems intuitive that there are correlations among the field instances and they may bias the result, as was extensively researched in the information retrieval area.\n\nAnother field-related issue is regarding the process of selecting the fields that will be considered in the whole process.\n\nOverall, the proposal is simple and basically consists of combining scoring functions associated with fields, without considering their correlations and other characteristics that may either characterize the task or explain problems or failures. 
For instance, although the paper focuses on the information carried by the fields, it seems intuitive to mix the aggregated value of the fields with the remaining text, exploiting any information available there. \n\nThe experimental results also need to be improved, as detailed next. First of all, the two experimental hypotheses seem to be too simple, thus quite easy to demonstrate. The advantages of using document structure are expected, in particular considering the additional information given to the models. The expected gains of hybrid approaches are also quite predictable. In both cases, it would be interesting to somehow derive upper bounds on the gains, so that the results go beyond benchmark-based evidence. \n\nOn the other hand, it is also necessary to clearly outline the limitations of such hybrid approaches and how they may be addressed. I really missed confidence intervals or similar statistical significance metrics on the results. For instance, in Table 2, the results may be too close and, despite the bold highlight, it is not clear how far the results are from the remaining baselines.\n\nOne terminology issue is regarding the difference between structured and semi-structured and where the paper fits. It seems to me that semi-structured is the proper jargon." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "How robust is the framework to variations in the number of fields, e.g., regarding field information overlap?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Research on structured document retrieval is highly relevant, especially for RAG approaches. The retrieval is well designed, using a hybrid and adaptive query scoring mechanism that combines both dense and lexical methods as well as a ranking strategy. The evaluation is thorough, and the paper is well-structured and generally well-written." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a framework to improve document retrieval (Multi-Field Adaptive Retrieval, MFAR). Whereas document retrieval typically uses unstructured data for tasks such as retrieval augmented generation, MFAR is designed for structured data where documents have multiple fields (e.g., titles, abstracts, author information). MFAR first decomposes documents into individual fields and then learns a model that adaptively weights fields based on the input query. The framework uses both dense and lexical methods for retrieval, optimizing the combination of these representations per field to improve retrieval performance. Experiments show that MFAR outperforms previous retrieval models in complex document retrieval on the STaRK dataset." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The fine-tuning approach makes the approach specific to a set of fields from a dataset. Information overlap in fields (see lines 416-424) might intrudice some redundancy to the retrieval process." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. Are \"Dense only\" in Table 4 and \"mFAR_dense\" in Table 2 the same (the numbers are close but different). Were mFAR_dense and mFAR_lexical trained separately or trained together and one component ablated? \n\n2. See the weakness section, which I phrased as a question." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is well motivated and well written. Many documents naturally have structure. Exploiting it should lead to better retrieval quality. However, doing so depends on the query, corpus and kind of retrieval. This paper comes up with an elegant and intuitive formulation to learn all of these weights during training. The baselines are well chosen, ablation experiments are extensive and the references are comprehensive." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper exploits the structure in various IR corpora by learning query, corpus, field dependent weights across vector retrieval and token retrieval. The paper compares the methods to a number of strong baselines and analyzes a thorough set of ablation experiments to identify difference makers across different benchmarks. This paper is well written, easy to read and has a comprehensive set of references." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main weakness, and kudos to the authors for discussing this, is what they mention in Section 4 \"Multi-field vs Single-field\": \"A side-by-side comparison of the single field models against their multi-field counterparts shows mixed results\". The difference in averages between mFAR_2 and mFAR_all doesn't appear statistically significant. The primary gains (looking at mFAR_2 vs mFAR_all) seem to be in the prime data set which has a lot of fields. Should the paper instead focus on adaptive hybrid retrieval, and choose hybrid retrieval baselines?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. What is exactly used for dense/vector retrieval?\n2. How did you decide on selection of top-100 records for each field (L885)?\n3. In Eq. 4, what is exactly q in G(q, f, m): I assume it should be a dense query encoder, but it's not clear from the text (at least for me)." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "* The paper revisits an important problem.\n* The paper is very well written\n* Experimental results are convincing and have quite a few baselines.\n* The method is simple and elegant yet effective" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This is a timely \"revisit\" of a multi-field retrieval problem with a proposal of multi-field adaptive retrieval that combines sparse-lexical and learned-dense-vector-similarity scores using field-weights, which are learnable and adaptive. Namely, each weight is a neural network incorporating a query representation and a field-specified learned weight vector.\n\nThe paper is very nicely written, it has a convincing set of results, as well as thorough literature review." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "N/A" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Regarding Weakness 1 in the previous section, did you try to use other dense retrievers and do you have any insights about experiments with better retrievers?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper proposes a new approach that incorporates structured document retrieval, where documents contain multi-field information. The proposed method adaptively controls the information by using a weight adjustment strategy based on the query. \n2. The experimental results demonstrate that mFAR framework over various baselines on the STaRK dataset. \n3. The authors present detailed analysis demonstrating why their hybrid approach can outperform baselines through the experiments of both multi-field vs. single-field and hybrid vs. lexical/semantic similarity. \n4. The paper is well-structured, motivated, and written, thus easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents Multi-Field Adaptive Retrieval (MFAR), a framework for retrieving structured documents by dynamically weighting document fields based on query relevance. 
By combining lexical and dense retrieval methods, MFAR improves retrieval accuracy on the STaRK dataset, surpassing state-of-the-art baselines and highlighting the potential of adaptive, hybrid approaches for structured data retrieval." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Though several recent retrievers [1] and [2] have been proposed, Contriever is used as a representative of dense retrievers without sufficient discussion. I am concerned that the conclusion may change if the authors use the recent retrievers. Especially, since FollowIR is instruction-tuned, it may be more robust against negation in the query, which the authors pointed out as a possible issue of the single-field method. \n2. The adaptive weighting mechanism in MFAR relies on training within specific domains and structured datasets, making it potentially sensitive to domain-specific data characteristics. This might lead to suboptimal field selection or scoring when the document structure is inconsistent across the corpus. \n\n[1] Weller, Orion, et al. \"FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions.\" arXiv preprint arXiv:2403.15246 (2024).\n\n[2] Xiao, Shitao, et al. \"C-pack: Packaged resources to advance general chinese embedding.\" *arXiv preprint arXiv:2309.07597* (2023)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "n/a" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- S1: This study is the first to demonstrate that multi-field ranking outperforms single-field ranking within the context of dense retrieval. \n- S2: The experiments highlight that hybrid models enhance both single- and multi-field rankings, with certain multi-field scenarios benefitting more from hybrid approaches.\n- S3: A thorough ablation study demonstrates the significance of the query-conditioning mechanism in optimizing field weights. Additionally, the qualitative analysis shows the variability in optimal scorers across datasets, showing that there is no universally best approach.\n- S4: A detailed case study further illustrates the method’s effectiveness in practical applications." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a method called Multi-Field Adaptive Retrieval (MFAR), aimed at enhancing retrieval of structured or multi-field documents. The approach decomposes each document into multiple fields, each independently indexed and retrievable through different models. Each field is associated with a learnable embedding, and an adaptive function is trained to predict the importance of each field based on the query and scoring model. 
The authors validate MFAR's effectiveness through experiments across multiple datasets, comparing it against prior methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- W1: Although the paper focuses on multi-field ranking, classic methods like BM25F [1] and its subsequent extensions [2,3] are not included in the baselines. Including at least BM25F would provide valuable context and facilitate a more comprehensive comparison.\n- W2: Query-dependent field weighting has been previously explored, such as in table retrieval methods that incorporate both query-dependent and query-independent features [4]. Testing on table retrieval datasets could offer additional insights, as tables represent another structured, multi-field document type.\n- W3: The proposed method adaptively determines the importance of each field given a query and scorer; however, it does not select among scorers, instead requiring calculation of all scoring potentials, thereby increasing computational load.\n\n\n[1] Simple BM25 extension to multiple weighted fields, 2004\n\n[2] Field-Weighted XML Retrieval Based on BM25, 2006\n\n[3] Extending BM25 with Multiple Query Operators, 2012\n\n[4] Web Table Retrieval using Multimodal Deep Learning, 2020" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We introduce a framework for retrieval over structured documents that adaptively accommodates multiple scorers." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024multifield,\ntitle={Multi-Field Adaptive Retrieval},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3PDklqqqfN},\nnote={under review}\n}" }, "abstract": { "value": "Document retrieval for tasks such as search and retrieval-augmented generation typically involves datasets that are _unstructured_: free-form text without explicit internal structure in each document. However, documents can have a structured form, consisting of fields such as an article title, message body, or HTML header. To address this gap, we introduce Multi-Field Adaptive Retrieval (mFAR), a flexible framework that accommodates any number of and any type of document indices on _structured_ data. Our framework consists of two main steps: (1) the decomposition of an existing document into fields, each indexed independently through dense and lexical methods, and (2) learning a model which adaptively predicts the importance of a field by conditioning on the document query, allowing on-the-fly weighing of the most likely field(s). We find that our approach allows for the optimized use of dense versus lexical representations across field types, significantly improves in document ranking over a number of existing retrievers, and achieves state-of-the-art performance for multi-field structured data." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "information retrieval", "hybrid retrievers", "structured data" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/bd8a97f702d3017edea5c219f8668eabb781312a.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Multi-Field Adaptive Retrieval" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3PRvlT8b1R
Visual Description Grounding Reduces Hallucinations and Boosts Reasoning in LVLMs
main
Active
lvlm;hallucinations;reasoning
foundation or frontier models, including LLMs
3;5;5;6
4;4;4;4
2;3;3;3
2;3;3;2
3;3;4;3
4.75
4
2.75
2.5
3.25
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. How can hallucinatory results be mitigated using the proposed VDGD in the newly defined hallucination categories, compared to other decoding strategies? Analyses through section 3 and 4 are really intriguing, but the reviewer belives that there is significant gap to bridge such motivation and findings into the VDGD design.\n\n\n2. The method of VDGD is limited to merely prefixing self-generated model response and relying on the first generated response that the model predicts (ironically this may include a lot of hallucinatory responses-even if limitation section mentioned this). Considering LLMs are more prone to hallucination snowballing rather than error correction, it is unclear where the performance gains are derived from. Unlike original contrastive decoding, VDGD cannot make logit corrections by counteracting with premature models and relies solely on vocabulary truncation.\n\n\n3. Computational analyses should be conducted such as throughput or latency. Also, can this VDGD be seamlessly applied to beam search decoding? Then, how will be the result comparison?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Through section 3 and 4, in this paper, the authors extensively explored hallucination issues across various benchmarks, models, and decoding strategies. The novel taxonomies beyond simple object hallucination are crucial to understand the current problems in hallucination research areas (particularly LVLM)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors argue that the current LVLMs and hallucination mitigation decoding strategies lack visual perception capabilities. Through extensive analyses, the authors delve into such deficiency across cognitive and real-world benchmarks and categorize more detailed hallucination taxonomies beyond mere object hallucination: Language Hallucination / Vision Hallucinations / Style Hallucinations / IT Hallucinations. At the end, the authors introduce a new decoding strategy, VDGD, which prefixes the model's detailed descriptive response to the given input and truncates grounding of its intermediate responses to refine the final answers." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Even with the new hallucination categories, and new findings, their approach, VDGD, lacks of analyzing its effectiveness on the new hallucination categories they defined and limited for its computational costs due to successive response generation." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Please see weaknesses part." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The problem of cognitive reasoning in hallucination is interesting and seems to be under-explored in previous works.\n2. The analysis is sufficient, solid, and easy to follow, yielding several interesting insights.\n3. The proposed method, although appears to be very simple, is based on the analysis on \"language grounding\" in previous sections, which has a reasonable motivation of such design.\n4. The method demonstrates consistent improvements across benchmarks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on the cognitive reasoning aspect of hallucination in large vision language models. Through a set of analysis and experiments, it demonstrates that the core blocker of this issue is the difficulty of linking recognized visual concepts to the internal knowledge of LLM. Therefore, the paper further proposes a simple method that per-appendes the image description to the text instruction as the full instruction so that the model can better leverage its reasoning capacity. Evaluation shows that this method can achieve consistent performance improvement on reasoning-focused benchmarks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. This method seems to be limited in science domain, e.g., chart understanding and reasoning. The underlying assumption of the method is that the image can be sufficiently described by texts. It might hold true for science images, e.g., one can easily describe a chart by enumerating all the involved data or simply transforming the chart figure into a table. However, for natural scenes with complex object categories, attributes, and relations, it is almost impossible to fully represent the image with texts. The evaluated benchmarks seems to be focused on such kind of data.\n2. Based on my first point, I may suspect that the essential reason of the performance improvement comes from that chart figure is more intuitive for human eyes while text descriptions of data is more suitable for LLM to understand. It may has little relation with **cognitive reasoning**.\n3. Also, based on my first point, we'd better not simply regard such science data as reasoning, there can be other forms of reasoning in natural scenes according to some related works [1].\n4. The analysis, though informative, takes too much space, and it may have overlap with previous works [2]. For example, categorization of hallucination types in this paper is essentially based on the **cause** of hallucination, which has been discussed in previous works.\n5. 
Moreover, the experiments and investigation of the proposed method seem to be limited. It would be better to include more ablation studies.\n6. The related work section is somewhat limited. I understand it might be constrained by space, but it is important to review and discuss related work about hallucination, reasoning, benchmarks, and so on [2] [3].\n\nI will put my initial score as 5 and I hope the authors can resolve my concerns.\n\n[1] Lai et al. LISA: Reasoning Segmentation via Large Language Model\n\n[2] Bai et al. Hallucination of Multimodal Large Language Models: A Survey\n\n[3] Liu et al. A Survey on Hallucination in Large Vision-Language Models" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. In Sections 3.1 and 3.2, you mention that existing hallucination mitigation techniques improve performance on AMBER but not on other datasets. Could you provide quantitative results to show this?\n\n2. In Section 3.3, the algorithm indicates that a base rank less than zero signifies a language hallucination. Since ranks are typically non-negative, could you explain how a rank can be less than zero in this context?\n\n3. The algorithm seems to classify instances as information transfer (IT) hallucinations first, which might influence the distribution of hallucination types. How do you ensure that this approach doesn't skew the results, particularly the higher incidence of IT hallucinations compared to visual and style hallucinations?\n\n4. In Section 4.1, you observe a decline in rank as the text prompt lengthens, suggesting reduced attention to image context. Given that longer prompts naturally introduce more textual context, how do you differentiate between the model's reliance on textual versus visual information in this scenario?\n\n5. Your proposed GPT-based evaluation metric shows a correlation of 0.96 with human responses, while existing benchmarks have a correlation of 0.92. Considering this marginal difference, what advantages does your metric offer over traditional evaluation protocols?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1) The paper is well written and nicely presented.\n2) The authors present a classification of different types of hallucinations.\n3) The authors recognize the gap between a model's visual recognition capability and its ability to utilize it for cognitive tasks.\n4) The authors propose a training-free strategy to mitigate hallucinations." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the problem of hallucinations in LVLMs. Strictly speaking, the authors provide a way to understand the root cause of such hallucinations in LVLMs. 
They claim that existing approaches for hallucination mitigation only focus on the visual recognition aspect of the model, and do not dive further into understanding whether the model actually has cognitive skills, thus failing to properly mitigate hallucinations in such models. The authors first conduct a study to investigate the various causes of hallucinations in LVLMs. Then they introduce VDGD, a training-free method to mitigate said hallucinations from an LVLM, and finally propose VaLLu, a benchmark to evaluate the cognitive reasoning capabilities of LVLMs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) Sections 3.1 and 3.2 are not that informative about the failure of hallucination mitigation techniques on cognitive tasks. The claim in 3.1 that all methods boost performance on AMBER but not on other datasets is weak since, even on AMBER, the relative performance increase is small, which is the same for other datasets as well. This goes against the claims of these two sections.\n\n2) In Section 3.3, the algorithm currently says that it is a language hallucination if the base rank is less than 0. How can a rank be less than zero? I think it should be a language hallucination if it does not fall inside the visual elements of the response. Moreover, I think you are using GPT-4 vision, and not just GPT-4 for this? Also, the visual content extraction itself will have hallucinations due to the use of llama-3. Further, pushing everything first to IT hallucination definitely skews the outputs of visual and style hallucinations; it can be both. And so the results showing much higher IT hallucination compared to the other two are misleading.\n\n3) The experiment in 4.1 showing the fall in rank as the text prompt length increases is nice, but it is also bound to happen since textual context is being added. This does not definitively prove that no image context is being attended to. Also, the rank difference between the two datasets is just 1. \n\n4) The GPT-type metric the authors propose is claimed to have a high correlation with human responses. But in the appendix we see that the other correlations are also quite high, with normal benchmarks having a 0.92 correlation compared to the authors' 0.96. This marginal difference is not significant enough to justify a new type of evaluation protocol." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "### Q1. While this does not affect my score, I believe the terms “perception” and “recognition” should be interchanged in the paper. Perception refers to the basic observation of visuals, while recognition is a more complex process based on what has been perceived. However, the paper appears to use these terms in reverse, which could cause some confusion for readers. 
\n\nRef: https://en.wikipedia.org/wiki/Visual_perception" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "### S1. The paper is well-structured and comprehensive, providing a smooth overall flow.\n\n### S2. The analysis is in-depth and offers interesting insights.\n\n### S3. The VaLLu benchmark has the potential to serve as a comprehensive benchmark for evaluating hallucinations in models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a comprehensive and extensive analysis on object hallucination. It proposes VDGD, a method that generates descriptions first, which are then used as prompts for a second inference. During decoding, KLD is calculated with the pre-generated descriptions to identify highly deviant candidates. The authors curate several datasets and introduce the VaLLu benchmark for a comprehensive hallucination evaluation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "### W1. While the idea is simple and effective, a significant drawback is the latency. The proposed method requires generating long descriptions even for short responses (e.g., in Fig. 8, a single token output would typically be very fast in a baseline method, but the proposed method is much slower). Therefore, it is important to include a latency analysis (e.g., average inference time, throughput) compared to simpler decoding-based hallucination mitigation methods [1,2,3] and baseline LVLMs, especially since many recent training-free methods perform a single inference.\n[1] Don’t Miss the Forest for the Trees: Attentional Vision Calibration for Large Vision Language Models, Arxiv 2024. \n[2] Seeing is Believing: Mitigating Hallucination in Large Vision Language Models via CLIP-Guided Decoding, Arxiv 2024. \n[3] RITUAL: Random Image Transformations as a Universal Anti-hallucination Lever in LVLMs, Arxiv 2024. \n\n### W2. Using the description from the first inference as a prompt for the second inference appears to act like a form of Chain-of-Thought (CoT) prompting. The authors should explicitly discuss how VDGD relates to or differs from Visual CoT methods [4,5,6], and compare several aspects (e.g., performance, computational requirements, applicability to different types of tasks). Comparisons or references to recent Visual CoT methods will strengthen the paper.\n[4] Compositional Chain-of-Thought Prompting for Large Multimodal Models, CVPR 2024. \n[5] Beyond Embeddings: The Promise of Visual Table in Visual Reasoning, EMNLP 2024. \n[6] Visual Evidence Prompting Mitigates Hallucinations in Multimodal Large Language Models, OpenReview. \n\n### W3. While each individual analysis is interesting, the overall flow feels somewhat disjointed. The paper presents a categorization of hallucinations and provides extensive explanations for each category, but there are no corresponding experimental results or analysis showing how the proposed method specifically addresses or improves each of these hallucination categories. This lack of connection between the categorization and the method’s effectiveness on each type of hallucination weakens the coherence of the paper. The authors should include a specific analysis or set of experiments demonstrating how VDGD performs on each category of hallucination they've identified. 
This would help tie together the theoretical framework and the practical application of their method.\n\n### W4. While the in-depth analysis is appreciated, the paper sometimes feels overloaded with content, which can distract from the core focus. At times, it is difficult to follow, and the connection between earlier sections and the methodology feels weak. The dense content also limits the space for method-related experiments, with only one experiment table included in the main paper. Most experiments have been relegated to the appendix, suggesting the need for better content management." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a novel method to reduce hallucinations and improve LVLM performance on cognitive prompts requiring reasoning and knowledge retrieval." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024visual,\ntitle={Visual Description Grounding Reduces Hallucinations and Boosts Reasoning in {LVLM}s},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3PRvlT8b1R},\nnote={under review}\n}" }, "abstract": { "value": "Large Vision-Language Models (LVLMs) often produce responses that misalign with factual information, a phenomenon known as hallucinations. While hallucinations are well-studied, the exact causes behind them remain underexplored. In this paper, we first investigate the root causes of hallucinations in LVLMs. Our findings reveal that existing mitigation techniques primarily reduce hallucinations for visual recognition prompts—those that require simple descriptions of visual elements—but fail for cognitive prompts that demand deliberate reasoning. We identify the core issue as a lack of true visual perception in LVLMs: although they can accurately recognize visual elements, they struggle to fully interpret these elements in the context of the input prompt and effectively link this recognition to their internal knowledge, which is critical for reasoning. To address this gap, we introduce Visual Description Grounded Decoding (VDGD), a simple, robust, and training-free method designed to enhance visual perception and improve reasoning capabilities in LVLMs. VDGD works by first generating a detailed description of the image and appending it as a prefix to the instruction. During response generation, tokens are sampled based on their KL divergence to the description, favoring candidates with lower divergence. Experimental results on multiple visual reasoning benchmarks and LVLMs demonstrate that VDGD consistently outperforms existing baselines 2% - 33%. Finally, we introduce VaLLu, a benchmark designed for comprehensive evaluation of the cognitive capabilities of LVLMs." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "lvlm", "hallucinations", "reasoning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/c671384a73a119e96e4d7b4f221ba66015f400f0.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Visual Description Grounding Reduces Hallucinations and Boosts Reasoning in LVLMs" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3PguviI7Uf
IPDreamer: Appearance-Controllable 3D Object Generation with Complex Image Prompts
main
Active
3D generation;Diffusion model
applications to computer vision, audio, language, and other modalities
3;5;6;6
3;4;2;4
1;3;3;2
2;2;3;2
2;2;2;2
5
3.25
2.25
2.25
2
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Based on my comments on the strengths and weaknesses, I currently still lean a little bit toward the positive rating. I would like to hear from the other reviewers and the authors during the rebuttal." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "+ The paper is the first time to consider generating 3D objects from complex images. It's quite interesting considering the current progress of the current 2D generative models.\n\n+ The paper is well-written and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces IPDreamer which, by leveraging the complex image prompts for the first time, can generate detailed 3D objects. To achieve this task, IPDreamer first proposes an Image Prompt Score Distillation Sampling (IPSDS) that leverages both RGB features and normal features to help guide the denoising process. The authors further introduce a mask-guided compositional alignment strategy that allows for extracting corresponding features from different images of the same objects, further improving the details of the 3D generation. Extensive qualitative and quantitative experiments have been provided in the paper." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Fig.1 is not clear. It's not able to showcase that existing methods struggle with complex images.\n\n- The results showcased are not quite aligned with the input image.\n\n- The masks in Fig.4 are not quite aligned with the corresponding parts.\n\n- It's hard to see the effectiveness of mask-guided compositional alignment. \n\n- The results provided in Fig. 5 are not very good.\n\n- What if we apply the best text-to-2D diffusion model to the DreamFusion or other text-to-3D pipeline and carefully design the text prompts? For example, the text-to-2D diffusion model that's capable of generating complex and high-resolution images." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- I do not understand Figure 1b. What is being generated, a 3D shape or an image? 
both the leaves and the water ripples look like images!\n\n- What is the difference between equations (11-13) and (14-17)? Are both used during optimization?\n\n- What is the impact of employing the super-resolution model, ControlNet tiling, on the final generated quality?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "- The idea of splitting complex objects into parts that are optimized jointly is interesting and can be potentially employed for more complicated 3D scenes.\n\n- The method section is comprehensive and provides an overview of SDS, making it self-contained.\n\n- The visual quality of the provided results is compelling." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a text/image-to-3D approach for controlling the appearance of generated 3D objects given complex input images where the subject is not clearly identified.\nThe proposed approach encompasses multiple components. \nFirst, IPAdapter image encoder is used to extract image features that are used as texture guidance within the Score Distillation Sampling (SDS).\nTo be able to handle complex images with multiple components, a mask-guided compositional alignment strategy exploits a Multi-Modal Language Model (MLLM) to provide localization part labels given the image and the provided coarse Nerf model.\nThen, cross-attention maps are used to localize those parts by computing attention between the image feature and the textual labels produced by the MLLM.\nFinally, the localized parts are optimized jointly to produce a globally consistent 3D object.\nExperiments show that the proposed approach produces high-quality results that abide by the guidance image." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper primarily focuses on controlling the generation of 3D objects from complex input images. As noted in line 537, \"IPDreamer addresses the limitations of existing text-to-3D and **single-image-to-3D** methods.\" However, the paper does not include comparisons with relevant single-image-to-3D methods, such as [1] and [2]. Could the authors clarify why these comparisons were omitted?\n\n- In Figure 7, the qualitative comparison presents different samples for each method. Conventionally, all methods are evaluated on the same samples to ensure consistency in comparisons. Could the authors provide insight into this choice?\n\n- The proposed method incorporates several additional components beyond the standard SDS pipeline, including ChatGPT, SAM, ControlNet, and IPAdapter. Could the authors provide details on the runtime overhead introduced by each component, as well as the overall runtime?\n\n- The method illustration in Figure 2 appears challenging to interpret. It does not effectively aid in understanding the proposed pipeline, and I found it difficult to correlate it with the text. A more intuitive figure might improve readability and clarity.\n\n[1] Shi, Ruoxi, et al. \"Zero123++: a single image to consistent multi-view diffusion base model.\" arXiv preprint arXiv:2310.15110 (2023).\n\n[2] Voleti, Vikram, et al. \"Sv3d: Novel multi-view synthesis and 3d generation from a single image using latent video diffusion.\" European Conference on Computer Vision. Springer, Cham, 2025." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See the weakness section" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is tackling an important and very challenging 3D genAI problem. Comparing to existing approaches, the IPDreamer could edit the objects using more complex image prompts\n- The introduced prompt score distillation sampling approach is a reasonable formulation that builds on existing SDS approaches, and the masked-guided alignment strategy seems to be highly effective \n- Experimental results suggest that the approach is better comparing to other counterparts. User studies is also provided." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduced a controllable 3D object generation approach using image prompts (similar to style transfer). The proposed IPDreamer approach is a novel method that could capture the intricacies of appearance features from image prompts, and could generate high fidelity and controllable 3D objects. The approach is tested on some public benchmarks with user studies available as well, and was proven to be effective." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I think this is a nice paper and a good extension to many of the existing approaches. The final output of the algorithm seems to be good enough. I do have a few clarification questions that I hope the authors could address in future revisions of the papers:\n- The paper leverages GPT-4v as MLLM inputs. How accurate should the MLLM be, in case people don't have access to this advanced MLLM algorithm? Would the output become much worse? \n- It's very nice to conduct user studies for genAI works in general. Could authors provide more demographics information in the appendix section? (age, gender, background, etc)\n- I don't fully understand Fig 1b, especially the right images -- what is the contents in the input and what is the actual real-world application of this particular input/output pair?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See Weakness." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The paper proposes a novel framework for 3D generation by breaking an image prompt into several parts and adopting a multi-guidance optimization process. Experiments demonstrate the effectiveness of the proposed framework.\n* The idea of the paper that breaks the complex images into several parts is interesting and good. Breaking a complex thing into parts makes a hard problem much easier." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper present a novel method to capture intricate appearance features from the Image prompts, which is further used to enhance the alignment of the image prompts with the generated objects. Experiments demonstrate the proposed method generate objects which is well-aligned with image prompts, show better ability in complex generation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The written of the paper is not so clear, some details are lack:\n * The description on how to adopt GPT-4v to generate localization prompts is lack in the paper.\n * In Figure 1 (b), the author gives comparison between VSD and IPSDS on text-based generation. But is the proposed method IPSDS need an image prompt? How to compare IPSDS with VSD on text-based generation? Moreover, for the cases in Fig (a), could the author provide the images parts extracted from the reference image of the castles. It’s hard to understand how could we break such things into parts.\n * For eq.9 and eq.10,the author highlights that “they localize the features of the multiple images onto 3D object” in many places such as Line 321-322, 349-350, which makes me very confused. I think the author is adopting eq.9 and eq.10 to fuse information from different image parts to do SDS loss. Therefore, this description is inaccurate and leads to misunderstanding. 、\n * Some annotation in the equations are missing, like $Z$ in eq.9.\n* In line 360, the author declares that a global optimization is further needed, which is achieved by simply concatenating all the features from the multiple images instead of adopting a mask based strategy. Why we need such a global optimization? What if we directly adopt global optimization without the mask-guided one? I think the author should provide such evaluation.\n* Finally, I think the evaluation of the paper is not enough. The accuracy of adopting SAM and GPT-4v to break into parts is not evaluated. Moreover, I think the author should provide more visualization examples on the extracted image parts together with the generation results, which will make overall process easier to understand." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024ipdreamer,\ntitle={{IPD}reamer: Appearance-Controllable 3D Object Generation with Complex Image Prompts},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3PguviI7Uf},\nnote={under review}\n}" }, "abstract": { "value": "Recent advances in 3D generation have been remarkable, with methods such as DreamFusion leveraging large-scale text-to-image diffusion-based models to guide 3D object generation. 
These methods enable the synthesis of detailed and photorealistic textured objects. However, the appearance of 3D objects produced by such text-to-3D models is often unpredictable, and it is hard for single-image-to-3D methods to deal with images lacking a clear subject, complicating the generation of appearance-controllable 3D objects from complex images. To address these challenges, we present IPDreamer, a novel method that captures intricate appearance features from complex **I**mage **P**rompts and aligns the synthesized 3D object with these extracted features, enabling high-fidelity, appearance-controllable 3D object generation. Our experiments demonstrate that IPDreamer consistently generates high-quality 3D objects that align with both the textual and complex image prompts, highlighting its promising capability in appearance-controlled, complex 3D object generation." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "3D generation", "Diffusion model" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/c22fe46ab2a16e43f80b40343e73afa3d7e36b3d.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/f541a717d267e9eaf428aa31a3e9890d7796e65b.zip" }, "title": { "value": "IPDreamer: Appearance-Controllable 3D Object Generation with Complex Image Prompts" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3Pn24GOcQ1
Geometry of the Loss Landscape in Invariant Deep Linear Neural Networks
main
Active
Invariant Models;Data Augmentation;Deep Linear Networks;Low Rank Approximation;Regularization
learning theory
3;5;5;6;6
2;4;3;4;3
2;3;2;3;3
1;2;2;3;2
2;2;2;3;2
5
3.2
2.6
2
2.2
0.731925
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "na" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Comparing learning with data augmentation vs an invariant architecture is an interesting problem and linear networks may be a good place to start for a theoretical study. The paper is written rather well but the global structure of the paper (presentation order) is confusing (see Weaknesses)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies the three related problems of learning the linear predictor under squared loss. The problem is not simple since the output dimension is $>1$ and it has been known since the seminal work of Baldi & Hornik that there are small-rank optimal matrices that generate saddles in the original problem. The paper studies similar problems in the data augmentation and regularization cases and shows the same loss landscape characteristics." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "It is not clear how much technical novelty there is in the paper. Proposition 2 is copied from Zhang and Theorem 4 is copied from Trager. Theorem 4 seems rather trivial, how is this novel compared to Baldi and Hornik?\n\nThe landscape complexity questions are interesting for complex losses where the full set of critical points is not known. In this paper, all local minima are global which is a stronger result on the loss landscape (and yes, possible for linear networks). Interestingly, the global minima are equivalent between problems in the infinite data limit, but what happens for finite data? Is there a separation between the problems? Like using invariance is better than using data augmentation? That kind of result would make the paper much more interesting. I might have missed such a point in the paper due to quick reading and confusing organization of the paper. \nI think it'd be better to state the three problems early in the paper similar to \n\nLevin, Eitan, Joe Kileel, and Nicolas Boumal. \"The effect of smooth parametrizations on nonconvex optimization landscapes.\" Mathematical Programming (2024): 1-49." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- I was not able to follow Section 4.3 and would appreciate any clarification on the main conclusion from the experiment or which theorem the experiment seeks to support.\n- Nordenfors et al. (2024) points out that the set of stationary points are identical for data-augmentation and hard-wiring, on both linear and certain nonlinear architectures. Could the authors comment on whether these results are more general than Theorem 3? \n- Can the results in this paper provide useful insights for practitioners? I do not believe a lack of immediate practical implication is a major weakness, but the paper might reach more audience by including some motivation for studying the loss landscape or invariant deep linear networks." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper provides a mathematically rigorous analysis of the loss landscapes for the three approaches and is a good contribution to the study of loss landscapes.\n- By establishing that data augmentation and hard-wiring result in identical critical points, the paper offers a unifying perspective on invariance in optimization. This connection complements recent study on comparing equivariant architectures and data augmentation.\n- The empirical results align well with the theoretical findings, providing concrete evidence that training with data augmentation converges to a critical point that parametrizes an invariant function." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper explores the loss landscape geometry of invariant deep linear neural networks, focusing on three approaches for enforcing invariance: data augmentation, regularization, and hard-wiring. The paper provides a theoretical comparison, demonstrating that the global optima for all three methods are equivalent in the limit of strong regularization and full data augmentation. It also examines the critical points of the loss landscapes, showing that data augmentation and hard-wiring result in identical sets of critical points, while regularization introduces additional ones. Empirical experiments show that training with data augmentation converges to a critical point that parametrizes an invariant function, data augmentation is computationally more expensive than hard-wiring, and regularization falls in between." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The settings considered in the paper seem limited. In particular:\n- As the authors also acknowledge, this paper focuses on deep linear networks. The results depend heavily on properties almost unique to this type of networks, such as that the network’s function space is a vector space of linear maps. There does not seem to a clear path that could extend the results here to other, especially nonlinear, architectures. \n- The main results are limited to cyclic and finitely generated groups, which does not apply to continuous groups in common datasets, such as rotation and scaling." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "In Figure 3(a), why doesn’t the non-invariant component of $W$ converge to zero? Additionally, in Figure 4(a), why does the non-invariant component of $W$ increase for data augmentation?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The authors theoretically solve the optimization problems arising from three different approaches in a bounded-rank linear model and discuss their solutions. This analysis offers fresh insights for the learning-with-symmetry community on the use and comparison of methods to achieve invariance.\n- They not only characterize the global optima of these approaches but also identify all the critical points, demonstrating that regularization leads to a greater number of critical points, which is an interesting result.\n- The authors verify their theoretical findings through experiments on the rotated MNIST dataset, showing that while both hard-wiring and data augmentation converge to the same global optimum, data augmentation demands higher computational costs.\n- The paper is well-written." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper explores the loss landscape of three approaches to achieve invariance: data augmentation, network hard-wiring, and regularization. The authors solve the optimization problems arising from each method and find that the first two approaches share the same critical points, whereas regularization leads to a larger number of critical points. Additionally, experiments show that data augmentation indeed results in invariance of the trained network, and that both data augmentation and hard-wiring converge to the same global optimum." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- As noted on line 169, the theoretical scope of this paper is limited to finite and cyclic groups. However, in many real-world applications, the relevant groups are not finite or cyclic, such as permutation group [1], rotation group [2], and sign and basis transformation group [3]. This limitation reduces the practical applicability of the paper’s findings.\n- Some assumptions in the paper lack adequate justification. For instance, in Remark 2, the authors use $Y=WX+E$ to suggest that the rank assumption of $\\overline{Z}^\\mathrm{inv}$ is mild. However, the noise matrix in this example seems unrealistic, as real datasets typically have structured rather than random correlations between data and labels. In Corollary 1, the authors assume that the singular values of three matrices are pairwise distinct, but this assumption is not justified. 
Verifying whether these assumptions hold in real datasets would improve the paper’s applicability.\n- Some key citations are missing. In Proposition 1, the authors characterize invariant linear maps under a cyclic group. However, [1] previously characterized all invariant and equivariant linear maps for symmetric groups, and [4] extended this work to identify all polynomials equivariant under symmetric groups.\n\n[1] Maron, H., Ben-Hamu, H., Shamir, N., & Lipman, Y. (2018, September). Invariant and Equivariant Graph Networks. In International Conference on Learning Representations.\n\n[2] Dym, N., & Maron, H. On the Universality of Rotation Equivariant Point Cloud Networks. In International Conference on Learning Representations.\n\n[3] Ma, G., Wang, Y., Lim, D., Jegelka, S., & Wang, Y. (2024). A Canonicalization Perspective on Invariant and Equivariant Learning. arXiv preprint arXiv:2405.18378.\n\n[4] Puny, O., Lim, D., Kiani, B., Maron, H., & Lipman, Y. (2023, July). Equivariant Polynomials for Graph Neural Networks. In International Conference on Machine Learning (pp. 28191-28222). PMLR." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* The paper focuses on deep linear networks for tractability, but how do you anticipate the results extending to non-linear neural networks, which are more prevalent in practical applications? Have you considered the potential challenges or modifications needed for such generalization?\n\n* How do you expect the optimization landscape and critical points to change when considering more complex or non-standard invariance structures? Could your theoretical framework be adapted to handle these?\n\n* Given the computational efficiency noted in your experiments, how scalable are the findings, particularly regarding the comparison between data augmentation and hard-wiring, when applied to much larger models or datasets, such as in convolutional or transformer networks?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* The paper provides a deep theoretical comparison of different approaches to enforce invariance (data augmentation, regularization, and hard-wiring). By proving the equivalence of global optima across these methods and analyzing critical points in function space, the authors offer valuable insights into the optimization landscapes of invariant models.\n\n* The paper specifically addresses the impact of invariance in deep linear networks, which is a significant area in machine learning. By narrowing down the study to a structured problem, it successfully derives concrete results that are applicable to broader, more complex architectures.\n\n* The combination of theoretical results with empirical validation is a strong aspect of this paper. 
The authors provide experimental evidence supporting their theoretical conclusions, such as the similarity in performance and convergence rates of data augmentation and hard-wired models. This connection strengthens the practical relevance of the theoretical findings." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper explores how different methods for enforcing invariance in neural networks—such as hard-wiring, regularization, and data augmentation—affect the loss landscape and optimization process. Focusing on deep linear networks, the authors compare these approaches in terms of their critical points and global optima. They show that for rank-constrained linear maps, both hard-wiring and data augmentation yield the same critical points, most of which are saddle points except for the global optimum. Regularization, while producing more critical points, eventually converges to the same global optima. The study provides theoretical insights into how these methods influence learning processes in machine learning models and helps explain their performance in reducing sample complexity. The authors also present experimental results to demonstrate convergence behavior in practical settings." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The paper focuses exclusively on deep linear networks, which are a simplified model of neural networks. While this approach allows for clear theoretical insights, the results may not fully generalize to more complex architectures, such as non-linear or deep convolutional networks that are commonly used in real-world applications.\n\n* The study centers on particular group-invariant structures, which might not cover a wide range of practical invariance cases. Invariance to more complex transformations, such as non-linear or higher-dimensional transformations, may require different analyses, limiting the applicability of the results to a broader set of machine learning problems." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "**Scientific quality:**\n\nLines 74-76 - how is this intuitive/obvious? It’s important to give that intuition. To play devil’s advocate, if it’s obvious then it’s even moreso important to clarify why this is studied and what new insight was achieved if any.\n\nLines 234-236 - isn’t it possible to formulate this for any group with a countable number of generators?\n\nLines 243-245 - do all roots work equally well? Assuming this corresponds to several classes of solutions no?\n\nLines 267-269, remark 2 - this is a good example but it’s unclear how typical this is, although it makes some intuitive sense that it would be. Making that clearer could be nice, are there more grounded reasons to believe some analogy of this generally holds? 
Seems related to the manifold hypothesis - you’re assuming there’s some latent structures and small deviations from it. Also although the rank can technically be large it might have many small singular values, no?\n\nLine 395-396 - why change Adam’s betas?\n\nFigure 1 - it’s quite interesting how results are similar for CE even though the theorems don’t hold for it, is this discussed anywhere?\n\nFor the hard-wiring experiments why not use a different B with a different size? Eg. to make all cases have a similar number of parameters, although their expressivity is clearly the same.\n\nLines 486-496 - this is a nice decomposition but I believe I saw it in other works, are you aware of it appearing elsewhere in the literature?\n\nLine 496-497 - why is the double descent interesting? Do you mean the orange line in figure 3.a?\n\n**Clarity:**\n\nLine 41 - “have shown” how? Feels like a citation’s needed.\n\nLines 47-49 - where in the paper are the solutions studied/referred to? Eg. when are the regularised solutions invariant? This is implicitly shown but not discussed. \n\nLines 123-124 - deteriorates how so? This is interesting and seems relevant here, consider slightly expanding.\n\nLines 145-146 - why is the tangent space relevant here? I didn’t understand where it was heavily used throughout the paper.\n\nLines 162-170 - missing caption?\n\nLines 169-170 - is finite and cyclic defined anywhere?\n\nLines 285-293 - the connection to manifold regularisation is interesting but it’s unclear what it adds - what’s lost if it’s removed? What does it say in this context?\n\nLines 295,296 - isn’t \\bar{Z(\\lambda)}^{reg} defined twice? Is it just different forms of the same expression? Generally this theorem’s intuitive meaning/interpretation is unclear.\n\nLines 303-305, thm 2 - why wouldn’t it be continuous, due to the rank constraint? Generally solutions to L2 regularisations are continuous so recommend making this clearer.\n\nLines 333-336 - this is quite interesting, it would be nice to discuss this - why it happens, potential implications, etc.\n\nMany of the previous kinds of comments about intuition/interpretation and what vs why hold for section 3.4 as well.\n\nLines 373-377 - does spurious here mean suboptimal? Recommend clarifying.\n\n**Minor points, suggestions, etc.:**\n\nIn the first paragraph what about classical examples eg. graphs/images?\n\nThere are some papers that weren’t mentioned which could be relevant throughout the paper, eg. [1-3, 7]. [2] specifically has some potential overlap and I recommend clarifying what’s different than their work. [7] might have overlap with section 4.3, specifically the weight decomposition.\n\nRecommend merging section 1.1 with 1.\n\n[4] and generally Saxe/Ganguli’s works are relevant both in spirit and regarding results, at least as some of the first to study deep linear networks in modern deep learning.\n\nLines 270-274 basically say that you’re taking projections and as everything is linear it’s fine, which is nice but can be spelled out more explicitly.\n\nSection 3.3 - can anything meaningful be said about the case where the symmetry is only “on average” embedded in the data, so only partial group orbits are included?\n\nSome small experiments with regular nonlinear networks showing whether these results hold and if so then to what extent would be instructive.\n\n[1] Gerken, Jan E., and Pan Kessel. \"Emergent Equivariance in Deep Ensembles.\" arXiv preprint arXiv:2403.03103 (2024).\n\n[2] Lyle, Clare, et al. 
\"On the benefits of invariance in neural networks.\" arXiv preprint arXiv:2005.00178 (2020).\n\n[3] Fuchs, Fabian Bernd. Learning invariant representations in neural networks. Diss. University of Oxford, 2021.\n\n[4] Saxe, Andrew M., James L. McClelland, and Surya Ganguli. \"Exact solutions to the nonlinear dynamics of learning in deep linear neural networks.\" arXiv preprint arXiv:1312.6120 (2013).\n\n[5] Olah, C., Cammarata, N., Voss, C., Schubert, L., and Goh, G. Naturally occurring equivariance in neural networks. Distill, 2020. doi: 10.23915/distill.00024.004. https://distill.pub/2020/circuits/equivariance.\n\n[6] Gruver, Nate et al. (2023). “The Lie Derivative for Measuring Learned Equivariance”.\nIn: The Eleventh International Conference on Learning Representations. url: https:\n//openreview.net/forum?id=JL7Va5Vy15J.\n\n[7] Gideoni, Yonatan. \"Implicitly Learned Invariance and Equivariance in Linear Regression.\"\n\n**Decision:**\nAs it is I recommend rejecting this paper mostly on the grounds of clarity, but that’s assuming that it presents a deeper novelty than what it currently seems to. It’s unclear if it has meaningful implications for real networks but it might still be insightful to consider this toy case, although this remains to be seen." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "**Novelty:**\n\nThis work studies an important, timely question - the relation between hard-wiring and learning symmetries. Although not explicitly stated, this would have implications for network design and generally when should one hardwire a symmetry vs just imbue it in the data and whether if there is a hidden symmetry will it be learnt.\n\n**Scientific quality:**\n\nThe setting nicely relates between the three different cases, albeit naturally in the limited linear case given here. \n\n**Clarity:**\n\nThe paper is well written and neatly organised. It has a good, logical flow, giving examples and generally trying to expand on theorems where possible. Section 2 is especially well, concisely presented given the large background." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates how different methods of implementing invariance - by having it hard wired, imbued in the data, or encouraged using regularisation - affect optimisation. They study the simple case of deep linear networks, rank-constraining them to make the optimisation non-convex. In this case the three different settings share the same optimum in the regulariser’s limit and have the same critical points, with the regularisation having more than the hard-wiring/data-augmentation cases." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**Novelty:**\n\nAs there are many works trying to answer this question it’s currently unclear what this one adds that others don’t. It’s known that networks can learn to respect symmetries, with empirical results being given eg. in [5-6]. [2] looked at using data augmentations vs feature averaging, which is not the same but similar to the data augmentation vs hardwiring cases here. These are select examples but still, the contrast to existing works isn’t as clear as it would ideally be.\n\n**Scientific quality:**\n\nIt’s unclear how the rank-constrained setting is related to real cases. 
In practice networks are universal - linear networks are often assumed for tractability but is limiting the expressivity realistic, and if so then how? Is it assumed solely to make the loss landscape nonconvex and if so, why not get that through a myriad of other ways? Recommend clarifying this.\n\nThere’s a special focus on cyclic and finite groups, without a clear explanation/motivation why. Do these encompass many of the common groups in geometric deep learning? What do they fail to describe?\n\nLines 397-399 - recommend showing how some weights tending to the same value results from the theorems. \n\n**Clarity:**\n\nThis paper suffers from the page limit as many similar theoretical deep learning papers do. This stops the authors from sufficiently expanding on the different results and giving more intuition for their theorems. Still, currently there’s an insufficient emphasis on the “why” relative to the “what”. For example, the abstract details the problem, the setting, and the results, but not what they mean, hence not explicitly answering the problem. This problem is evident on many levels in the main body as well. This is evident also in the contributions section (1.1). Other than that high level, theorems are given with little intuition as to why they hold or what they mean. The paper has a decent information density as it is so it’s naturally difficult to accommodate everything, but it can not only leave the reader confused but more importantly make it harder to understand how the results tie together and get at the underlying problem. Another example is the related work section - it’s unclear what important context these works give relating to this study, and if they don’t then why they are mentioned at all.\n\nThroughout the paper numbers/axis titles on plots should be bigger.\n\nLines 47-49 - The stated problem is different than the impression one gets from the abstract - the latter implies “can symmetries be learnt and how does their optimisation look” whereas the former says something else. Recommend rephrasing either or both.\n\nLine 53 - what does benevolent mean here? Recommend replacing.\n\nLine 71-72 - unclear what’s meant here by linear invariant neural networks given the different settings.\n\nLines 197 - shouldn’t det-variety be defined? It’s not a well known term, and if it’s not important enough to be defined it should be delegated to the appendix or not used.\n\nLine 211 - what’s r in this context?" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We compare the optimization landscape of linear networks models made invariant via hard-wiring, regularization, and data augmentation." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024geometry,\ntitle={Geometry of the Loss Landscape in Invariant Deep Linear Neural Networks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3Pn24GOcQ1},\nnote={under review}\n}" }, "abstract": { "value": "Equivariant and invariant machine learning models seek to take advantage of symmetries and other structures present in the data to reduce the sample complexity of learning. Empirical work has suggested that data-driven methods, such as regularization and data augmentation, may achieve a comparable performance as genuinely invariant models, but theoretical results are still limited. 
In this work, we conduct a theoretical comparison of three different approaches to achieve invariance: data augmentation, regularization, and hard-wiring. We focus on mean squared error regression with deep linear networks, which parametrize rank-bounded linear maps and can be hard-wired to be invariant to specific group actions. We show that the optimization problems resulting from hard-wiring and data augmentation have the same critical points, all of which are saddles except for the global optimum. In contrast, regularization leads to a larger number of critical points, again all of which are saddles except for the global optimum. The regularization path is continuous and converges to the hard-wired optimum." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Invariant Models", "Data Augmentation", "Deep Linear Networks", "Low Rank Approximation", "Regularization" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/5f7eea1d0a26a432bd7c978b7db58c0de86178cc.pdf" }, "presentation": null, "primary_area": { "value": "learning theory" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Geometry of the Loss Landscape in Invariant Deep Linear Neural Networks" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3Q7y9No9VF
A Time Series is Worth Five Experts: Heterogeneous Mixture of Experts for Traffic Flow Prediction
main
Active
Traffic Prediction;Mixture of Experts;Deep Learning;Spatio-Temporal data modeling
unsupervised, self-supervised, semi-supervised, and supervised representation learning
3;5;5;5
5;4;4;4
3;3;3;3
2;2;2;2
2;3;3;2
4.5
4.25
3
2
2.5
-1
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Same as the weakness." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper introduces two innovative components: a variable-centric modeling for traffic forecasting and a prior knowledge-centric modeling in the gating mechanism to anneal the overfitting/suboptimal problem.\n\n2. The ablation study effectively demonstrates the impact of these components on traffic speed prediction tasks.\n\n3. The paper includes comprehensive comparative experiments on two traffic speed datasets with details. It helps reproduce the work." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "TITAN offers an innovative approach to traffic prediction by addressing the limitations of sequence-centric models, which often miss variable-centric interactions. TITAN bridges this gap through a combination of sequence-centric, variable-centric experts, and a prior knowledge-centric expert.\n\nThe model’s prior knowledge-centric strategy supervises routing, enhancing accuracy, while an expert annealing strategy reduces leader reliance during training for better adaptability. Empirical results show TITAN outperforms the state-of-the-art." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The novelty presented in Section 3 is limited, as much of the content describes existing methods. More emphasis is needed on detailing the unique contributions of this work. What is the difference in the variable-centric modeling between your work and [2]?\n\n2. The paper is not well organized. For example, in the introduction section, you mentioned that current studies focus on sequence-centric modeling rather than variable-centric modeling. Since this serves as the motivation for your research, it is essential to clearly define what sequence-centric modeling and variable-centric modeling entail. Additionally, you should explain why a variable-centric approach is important. One figure can help explain it. \n\n3. The Figure 1 is unclear. The prior section needs to be redone to better illustrate its connection with the memory component. Additionally, the representation of hidden states should be clearer, as the current color scheme makes it difficult to distinguish whether the hidden states in the routing process come from a variable-centric or sequence-centric approach. Furthermore, clarification is needed regarding whether the two sets of QKV (query, key, value) weights are shared or independent. Lastly, the output appears to be isolated from the MoE routing section, which raises concerns about the cohesiveness and interaction between these components.\n\n4. Traffic flow typically refers to the number of vehicles passing along a specific road segment. 
To develop a model for traffic flow data, it is recommended to conduct experiments using established datasets such as PEMS03, PEMS04, and LargeST. To demonstrate that your model is effective for general traffic forecasting tasks, it is advisable to validate its performance across a variety of datasets that include not only traffic flow but also traffic density/occupancy data.\n\n5. Some results in Table 1 are from other papers, like TESTAM, MegaCRN [1]. It is necessary to claim it in the paper.\n\n6. More case studies can help show the contribution of your work. For example, you can provide examples of challenging cases where the inclusion of the prior knowledge-guided gating mechanism leads to significant model performance improvements.\n\n[1] Lee H, Ko S. TESTAM: A Time-Enhanced Spatio-Temporal Attention Model with Mixture of Experts[C], ICLR 2024.\n\n[2] Haotian Gao, Renhe Jiang, Zheng Dong, Jinliang Deng, Yuxin Ma, and Xuan Song. Spatial-TemporalDecoupled Masked Pre-training for Spatiotemporal Forecasting, April 2024." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. In lines 246-247, you mention that 'all sequence-centric models share the same inductive bias, which limits the performance of the MoE model.' The three sequence-centric experts are heterogeneous with differing inductive biases. Furthermore, why does shared inductive bias constrain the performance?\n\n2. In lines 250-252, you state that 'the variable-centric model does not share the same hidden state structure.' Could you provide evidence or a theoretical explanation for why this characteristic complicates control by the MoE routing mechanism?\n3. How does the annealing routing method perform compared with other advanced MoE routing strategies? \n4. How would TITAN likely perform in different spatio-temporal applications beyond those already tested in your study?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. **Meaningful Research Problem**: Though MoE has demonstrated extraordinary capacity in NLP and CV fields, its application in spatio-temporal (ST) problems is relatively underexplored. Therefore, the paper contributes to the understanding of MoE's potential in ST applications.\n2. **Exploring from Important Aspects:** The paper explores heterogeneous expert design and stable routing strategies, which are crucial steps when applying MoE to spatio-temporal applications. \n3. **Clarity in Writing**: The paper is fairly well-structured, with a clear explanation of the proposed model and its components, making it accessible for readers to understand the methodology and contributions." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a Heterogeneous Mixture of Experts (MoE) model named TITAN for traffic flow prediction. The model addresses the limitations of existing sequence-centric traffic prediction models by incorporating variable-centric and prior knowledge-centric modeling techniques. TITAN consists of three sequence-centric experts, one variable-centric expert incorporated by low-rank adaptive matrices, and a leader expert to supervise routing decisions. An expert annealing strategy is further employed to gradually reduce supervision from the leader expert during training. Experiments on two public datasets, METR-LA and PEMS-BAY, demonstrate that TITAN outperforms state-of-the-art models in terms of prediction accuracy." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Unclear Motivation**: \n\n **a) Missing definitions:** The terms 'sequence-centric' and 'variable-centric' are mentioned multiple times, but their definitions are unclear. This lack of clarity makes it difficult to understand why existing models belong to differing categories and what specific limitations they have. For examples:\n\n - The paper argues that models like GraphWaveNet cannot learn variable-centric representations. However, these models include cross-variable modeling through variable embeddings, raising questions about the necessity for an additional variable-centric approach. (Line 56-57). \n - The paper states that weighted averaging of sequence-centric and variable-centric modeling is ineffective, but no concrete explanation is provided to support this claim. (Lines 61-62)\n\n **b) Lack of Motivation for MoE Adoption**: The paper does not adequately explain why MoE is suitable for traffic prediction (Lines 63-72).\n\n **c) Scope Mismatch**: A significant portion of the paper is dedicated to the limitations of spatio-temporal prediction approaches, yet the paper only focuses on traffic prediction. \n\n2. **Insufficient Challenge Identification:** Suboptimal routing at the early stage of training, as mentioned in line 78, is a well-known issue with MoE models, and numerous solutions have been proposed over the years. As a paper aiming to adapt MoE to traffic prediction, there lacks in-depth thinking about domain-specific challenges, making the paper's contribution limited.\n\n3. **Limited Method Novelty and Unclear Interpretation:**\n\n **a) Sequence-centric design:** The sequence-centric experts used in this work are similar to previous approaches [1, 2]. However, the paper neither provides a clear motivation for using these experts nor highlights how they differ from homogeneous ST-MoE approaches [3]. In addition, the relevant refs [2, 3] are not cited or compared.\n\n **b) Variable-centric design:** The variable-centric expert design closely resembles the ideas proposed in itransformer [4], which is not cited or compared. Moreover, itransformer was initially designed for time series forecasting, and it may not be well-suited for modeling dynamic spatial dependencies as discussed in [5].\n\n **c) Gating network design:** The motivation for altering the classical sparse gating network into the proposed form in Eqn. (8) is not sufficiently explained. Furthermore, the paper suggests that relationships between nodes are often sparse in urban traffic flow forecasting (Lines 295-297), which contradicts Eqns. (3) and (4) that model global dependencies among all nodes.\n\n4. 
**Insufficient experiment:**\n\n **a) Unconvincing baselines results:** The paper simply copies the baselines results from ref [1]. Differences in computing resources and the absence of repeated experiments with varying random seeds cast doubt on the reliability of the results.\n\n **b) Lack of in-depth comparison with homogeneous MoE models [3].**\n\n **c) Concerns with the model efficiency:** MoE models are typically known for their efficiency, but TITAN falls short in this regard. As shown in Table 3, the inference time of TITAN is even longer than that of GWNet, raising concerns about its practical applicability.\n\n## Reference\n\n[1] Hyunwook Lee, et al. TESTAM- A Time-Enhanced Spatio-Temporal Attention Model with Mixture of Experts. ICLR2024.\n\n[2] Wenzhao Jiang, et al. Interpretable Cascading Mixture-of-Experts for Urban Traffic Congestion Prediction. KDD 2024\n\n[3] Shuhao Li, et al. ST-MoE: Spatio-Temporal Mixture-of-Experts for Debiasing in Traffic Prediction. CIKM 2023.\n\n[4] Yong Liu, et al. iTransformer: Inverted Transformers Are Effective for Time Series Forecasting. ICLR 2024.\n\n[5] Zezhi Shao, Exploring Progress in Multivariate Time Series Forecasting: Comprehensive Benchmarking and Heterogeneity Analysis. TKDE 2024." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. How can the Memory attention expert select which parts of the memory should be kept? \n2. There is confusion regarding the variable-centered experts. From my understanding, these experts should focus on multiple variables like inflow, speed, and demand in a given area. However, the experiment only uses speed data, which raises the question of how variable-centered experts differ from sequence-centered ones. If both experts operate on a single dataset, their impact seems nearly identical.\n3. In the annealing routing method, how do you resolve the potential conflict between the DTW method, which assumes static temporal patterns, and the sequence-centric expert, which dynamically captures time dependencies?\n4. This paper claims to use a graph-based approach, but there is little mention of graph construction details. For instance, how is the adjacency matrix generated? Is it provided by the dataset or learned during training?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper presents a novel combination of experts, addressing temporal focus dependency, spatio-temporal relationships, and memory collection strategies. Together, these experts effectively capture the intricate correlations within the data. Additionally, the framework design of TITAN allows for flexibility and adaptability, making it applicable to a wide range of spatio-temporal prediction tasks.\n2. 
The main experiments employ multiple state-of-the-art methods for evaluation, with TITAN outperforming all the compared approaches.\n3. The topic of the Mixture of Experts (MoE) framework is worth investigating, as it tackles the challenge of coordinating specialized experts to capture diverse dependencies in complex data, and each expert could be adapted to other domains." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes the TITAN framework, which incorporates sequence-centric and variable-centric experts and a leader expert supervising the routing process to capture complex spatio-temporal dependencies. They have also designed an expert strategy to improve the performance of the memory query process of MoE. Extensive experiments have also been performed on real-world datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper introduces a Memory Attention Expert for long-term prediction but lacks an explicit priority mechanism for memory storage. While attention can select important components based on similarity, it doesn’t fully solve long-term prediction issues. If past relevant information is forgotten, the expert may fail to capture long-term patterns effectively.\n2. The DTW matrix for prior knowledge may hinder the performance of the sequence-centric expert. DTW assumes static temporal patterns based on historical data, which may conflict with the sequence-centric expert's goal of dynamically learning time dependencies from the input. This reliance on fixed temporal similarities could limit the ability of the expert to adapt to new time patterns during training.\n3. The experiments raise some concerns, as the paper focuses on traffic flow, but the datasets used are for speed, which creates a disconnect between the methods and the topic, making the results less convincing.\n4. The experiments should include longer prediction intervals, as the current 15-60 minute range is insufficient to fully evaluate the memory attention expert, which is intended to be more beneficial for long-term prediction tasks.\n5. Table 1 contains a typo regarding the number of real-world datasets. It claims to use three, but only two are included in the experiments.\n6. No code has been provided, making it difficult to evaluate the methods and reproduce the results." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "No" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see the weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper studies a critical task, i.e., traffic forecasting, which has wide-ranging applications in smart cities, autonomous driving, and transportation management.\n2. 
The model design intuitively mirrors the real-world interactions in traffic systems, where periodic temporal patterns and cross-node relationships need to be captured for accurate forecasting.\n3. The experimental results showcase TITAN’s superior performance across two benchmark datasets (METR-LA and PEMS-BAY), with improvements over state-of-the-art models in three popular metrics across different prediction horizons." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces TITAN, a heterogeneous mixture of expert model designed to improve traffic forecasting. TITAN integrates both sequence-centric and variable-centric modeling techniques alongside a supervised routing mechanism driven by prior knowledge. By leveraging four distinct expert groups and a low-rank adaptation method, the model aims to capture diverse spatio-temporal dependencies in traffic data. Experimental results on two real-world datasets show performance improvements over state-of-the-art models, demonstrating the effectiveness of TITAN." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The novelty of the proposed model appears limited. The three types of sequence-centric modeling experts have been extensively studied in previous literature. Furthermore, the idea of variable-centered modeling seems to draw heavily from the iTransformer [1], which has already explored variable-specific tokenization and attention mechanisms.\n2. This paper lacks detailed analysis regarding the computational complexity of the proposed model, making it difficult to evaluate the model's efficiency and scalability. Since attention mechanisms typically exhibit quadratic complexity with respect to the number of nodes, this could result in substantial computational overhead when scaling the model to large-scale road networks.\n3. The evaluation scope is limited. The paper focuses on only two small datasets (i.e., METR-LA and PEMS-BAY), which may not be representative of broader traffic forecasting tasks. Testing the model on larger datasets, such as those with more complex and extensive urban traffic networks (e.g., LargeST [2]), would provide a better assessment of its real-world applicability.\n4. Some important recent studies, such as [3] and [4], are not discussed in the related works.\n\n[1] Liu, Yong, et al. \"iTransformer: Inverted Transformers Are Effective for Time Series Forecasting.\" In The Twelfth International Conference on Learning Representations.\n\n[2] Liu, Xu, et al. \"Largest: A benchmark dataset for large-scale traffic forecasting.\" Advances in Neural Information Processing Systems 36 (2024).\n\n[3] Li, Shuhao, et al. \"ST-MoE: Spatio-Temporal Mixture-of-Experts for Debiasing in Traffic Prediction.\" Proceedings of the 32nd ACM International Conference on Information and Knowledge Management. 2023. \n\n[4] Jiang, Wenzhao, et al. \"Interpretable cascading mixture-of-experts for urban traffic congestion prediction.\" Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2024." 
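To keep the sequence-centric vs. variable-centric distinction concrete, and to ground the concern above about quadratic attention cost in the number of nodes, here is a minimal sketch in the spirit of inverted (variable-as-token) attention. It is not taken from TITAN or iTransformer; projections, multi-head structure, and batching are omitted.

```python
import numpy as np

def self_attention(tokens):
    """Plain single-head self-attention over the first axis (n_tokens, dim)."""
    d = tokens.shape[-1]
    scores = tokens @ tokens.T / np.sqrt(d)                  # (n_tokens, n_tokens)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ tokens

# x: one sample with T time steps and N traffic sensors (variables)
T, N = 12, 5
x = np.random.default_rng(0).normal(size=(T, N))

# sequence-centric view: each time step is a token -> attention cost O(T^2)
seq_out = self_attention(x)          # (T, N)

# variable-centric view: each sensor's whole series is a token -> cost O(N^2)
var_out = self_attention(x.T).T      # (T, N)
```

The only difference between the two views is which axis is treated as the token axis, so the attention cost scales as T^2 in the sequence-centric case and N^2 in the variable-centric case, which is why scalability to large road networks is a fair question.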
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024a,\ntitle={A Time Series is Worth Five Experts: Heterogeneous Mixture of Experts for Traffic Flow Prediction},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3Q7y9No9VF},\nnote={under review}\n}" }, "abstract": { "value": "Accurate traffic prediction faces significant challenges, necessitating a deep understanding of both temporal and spatial cues and their complex interactions across multiple variables. Recent advancements in traffic prediction systems are primarily due to the development of complex sequence-centric models. However, existing approaches often embed multiple variables and spatial relationships at each time step, which may hinder effective variable-centric learning, ultimately leading to performance degradation in traditional traffic prediction tasks. To overcome these limitations, we introduce variable-centric and prior knowledge-centric modeling techniques. Specifically, we propose a Heterogeneous Mixture of Experts (TITAN) model for traffic flow prediction. TITAN initially consists of three experts focused on sequence-centric modeling. Then, designed a low-rank adaptive method, TITAN simultaneously enables variable-centric modeling. Furthermore, we supervise the gating process using a prior knowledge-centric modeling strategy to ensure accurate routing. Experiments on two public traffic network datasets, METR-LA and PEMS-BAY, demonstrate that TITAN effectively captures variable-centric dependencies while ensuring accurate routing. Consequently, it achieves improvements in all evaluation metrics, ranging from approximately 4.37\\% to 11.53\\%, compared to previous state-of-the-art (SOTA) models. The code will be released upon acceptance." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Traffic Prediction", "Mixture of Experts", "Deep Learning", "Spatio-Temporal data modeling" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/1a82d2d6cec57cfc4dd8b2aaad98afc138f3b578.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "A Time Series is Worth Five Experts: Heterogeneous Mixture of Experts for Traffic Flow Prediction" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3QinqLlMCj
PF3plat: Pose-Free Feed-Forward 3D Gaussian Splatting
main
Active
Generalized Pose-Free Novel View Synthesis;3D Reconstruction
applications to computer vision, audio, language, and other modalities
3;5;6;6
5;4;3;4
2;2;3;3
2;3;3;3
2;3;3;3
5
4
2.5
2.75
2.75
-0.866025
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Why does using the pixel-wise depth offset estimation model promote consistency across views? (line 211)\n2. How about the performance of PF3plat in dynamic scenes?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The task of novel view synthesis from unposed images in a single feed-forward pass is highly practical.\n2. The paper demonstrates state-of-the-art results on Re10k and ACID, showcasing the effectiveness of the proposed method. \n3. The refinement modules designed in the paper have significantly improved the effectiveness." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces PF3plat, a novel framework designed for novel view synthesis from unposed images in a single feed-forward pass. PF3plat leverages pre-trained monocular depth estimation and visual correspondence models to achieve an initial coarse alignment of 3D Gaussians. Subsequently, PF3plat incorporates refinement modules and geometry-aware scoring functions to further refine the depth and pose estimates derived from the coarse alignment to enhance the quality of view synthesis." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. PF3plat leverages a robust solver for pose estimation between each pair of cameras; thus, increasing the number of viewpoints significantly extends the feed-forward pass time.\n2. PF3plat relies on the coarse alignment of 3D Gaussians, and a small overlap may affect the quality of the correspondence model." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "As I already discussed in the weakness section, it would be very imporatnt if you can justify those points in the experiments." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The authors tackle the problem of 3D Gaussian splatting from unposed sparse images, which is an interesting and important topic\n2. 
The authors apply recent state-of-the-art depth estimation and pose estimation methods for coarse pose alignment, and further introduce pose and depth refinements, to some extent improving the final performance.\n3. The paper is overall well-written and easy to follow in most parts." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a method to tackle the challenging problem of novel view synthesis (NVS) from unposed images in a feed-forward manner. They identify the issue in previous pixel-aligned 3DGS methods, where the predicted 3D Gaussians from different views have the problem of misalignment, leading to noisy gradient flows and poor NVS results. They propose a method where they don’t need to rely on the poses obtained from off-the-shelf tools. Instead, they leverage pre-trained monocular depth estimation and visual correspondence models to obtain coarse alignment, with further refinement of the depth map and poses. Results show that, among pose-free methods, it achieves decently better NVS results. However, for pose estimation, it is still worse than general methods like Mast3R." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**The method section is overall clear, but is missing some details and discussions**\n\n*Unclear descriptions in camera pose refinement in Sec 3.2.3* \n1. It is quite unclear to me how exactly it is done. From your writing, it seems that you first obtain newly computed camera poses $\hat{P}_{ij}$ similarly to the coarse step, but with the refined depth. This $\hat{P}_{ij}$ is already the refined pose, right? However, you further have another refinement step as shown in Eq. (2). What is the rationale behind this?\n2. What is the T_pose network? What is the E_pos in Eq. (2)?\n\n*Cost volume* \nIn section 3.2.4, for the “cost volume construction and aggregation”, is that any different from the MVSplat paper? Can you justify the differences?\n\n*2D-3D Consistency loss* \nIn lines 291-292, you said that you improve the robustness of the model in regions with low texture or significant viewpoint changes. However, feature matching methods like LightGlue do not provide many correspondences in those low-texture regions. I guess you cannot claim the robustness there? \n\n*Unclear implementation details* \nWhat is the frame distance from 45 to 75? Do you mean you sample one frame every 45/75 frames in the video sequence? You might want to make this point clearer. And why, for DL3DV, do you only sample every 5 or 10 frames, a much smaller gap than for ACID or RE10K?\nAlso, you mention that you train for 40000 iterations on a single A6000 GPU; is it the same for all three datasets? If so, this may not make much sense: as you mentioned in lines 351-354, RE10K has ~21K videos for training, while you only use 2K scenes for the training of DL3DV.\n\n\n\n**The experimental section is convincing in general, but lacks some important experiments / baselines and explanation.**\n\n1. For pose estimation comparison, the SOTA method right now is RoMa [1]; I think it is fair to ask for a comparison to it.\n2. I wonder why your method still lags behind Mast3R on pose estimation for both RE10K and ACID, even though Mast3R is not trained at all on those datasets, while your method was trained on each of them separately.\n3. 
Why for novel view synthesis in Table 1, you don’t show the comparison to the recent pose-required methods pixelSplat?\n4. Based on the experiments you show, your pose estimation is worse than Mast3R in almost all cases, and the NVS results are also worse than MVSplat in all scenarios (even on DL3DV in Table 3, where MVSplat was not trained on). One reasonable baseline to me is, get camera poses from Mast3R, and then run MVSplat directly. I wonder how your method compares to such a simple baseline?\n5. In your ablation study table 4, what is the point of adding V, I-I, I-II, I-V, if they are just all N.A.? That is really weird to me. You can just describe them in texts.\n\n\n**Writing** \nSec 3.2.1: You use two paragraphs to motivate by mentioning the limitations of the previous methods. The real content for the coarse alignment is really just the third paragraph between line 186-191. The motivations part is actually unrelated to the coarse alignment but more on why your method is needed, so you should just put them in the introduction instead.\n\n[1] Edstedt et al.: RoMa: Robust dense feature matching, CVPR 2024" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Q1 - How would the results differ if alternative coarse prediction networks, such as Dust3r[1], Mast3r[2], or others, were used?\n\nQ2 - Qualitative results for the pose estimation task.\n\n[1]Wang S, Leroy V, Cabon Y, et al. Dust3r: Geometric 3d vision made easy[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 20697-20709.\n[2]Leroy V, Cabon Y, Revaud J. Grounding Image Matching in 3D with MASt3R[J]. arXiv preprint arXiv:2406.09756, 2024." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "S1 - This paper addresses a meaningful task with significant potential for real-world applications.\n\nS2 - This paper leverages two robust pre-trained models to generate initial pose and shape estimates, which significantly enhance the model's performance.\n\nS3 - The paper is well-written, and the experiments are logically sound." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on the pose-free feed-forward novel view synthesis task. It leverages a pre-trained monocular depth estimation model and a visual correspondence model to generate coarse pose and depth estimates, enhancing the stability of the 3DGS training process. Subsequently, a lightweight refinement model is used to further improve the depth and pose estimations. Extensive experiments have been conducted to demonstrate the effectiveness of the proposed method." 
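Several questions above ask how the coarse camera poses are obtained from predicted depth and correspondences. A common generic recipe is to back-project matched pixels with the predicted depth and estimate the relative pose by rigid (Kabsch/Procrustes) alignment; the sketch below shows only that textbook recipe and is not PF3plat's actual solver, which per the reviews runs a robust solver over camera pairs.

```python
import numpy as np

def backproject(uv, depth, K):
    """Lift pixels (n, 2) with per-pixel depth (n,) to 3D camera-frame points."""
    ones = np.ones((uv.shape[0], 1))
    rays = (np.linalg.inv(K) @ np.hstack([uv, ones]).T).T   # points at depth 1
    return rays * depth[:, None]

def rigid_align(P, Q):
    """Kabsch/Procrustes: find R, t such that Q ~= P @ R.T + t (row-wise points)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cQ - R @ cP

# toy example: matched pixels seen from view i, with a known relative pose to view j
rng = np.random.default_rng(0)
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
uv = rng.uniform([0, 0], [640, 480], size=(100, 2))
depth = rng.uniform(1.0, 5.0, size=100)
P = backproject(uv, depth, K)                               # points in frame i
theta = 0.2
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.3, -0.1, 0.5])
Q = P @ R_true.T + t_true                                   # same points in frame j
R_est, t_est = rigid_align(P, Q)
assert np.allclose(R_est, R_true) and np.allclose(t_est, t_true)
```

In practice such an alignment would be wrapped in a robust loop (e.g., RANSAC over the correspondences), since monocular depth and matches are noisy.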
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "W1 - The performance of this method appears to be highly dependent on the coarse pose and depth results provided by the pre-trained model.\n\nW2 - The paper lacks qualitative results for the pose estimation, which would provide a clearer assessment of the model's performance in this area." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Can the authors provide more insights on how you obtain the coarse camera parameters? I guess the authors directly implemented the existing work to obtain those things.\n2. the authors trained different checkpoints on different datasets in the implementation details. Does it mean the 'feed-forward' claimed by the authors is actually dataset-specific? If so, I think it is a big limitation of the proposed method lacking the generalizability to different scenes." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The authors proposed a new pose-free feed-forward method in camera pose estimation.\n2. The authors conducted enough ablation studies to present the contribution of each component of their method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Given G.T. camera intrinsics, the authors first leverage the existing work to obtain the coarse camera poses and depth. Then to refine the estimation, the authors design a module to learn the depth offset estimation with the help of an existing depth estimation network. Furthermore, the camera pose refinement is conducted in another module. The idea of feedforward pose estimation is interesting, but there is still a gap between the performance of the proposed method and some per-scene optimization methods. Since I did not see the authors report any inference time result and I believe some static scene pose-free per-scene optimization methods (CF-3DGS, ..) are very fast and accurate, I expect the authors to provide more comparisons. There are still some questions and limitations raised below. I will consider improving the grade upon the feedback from the authors." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The biggest limitation of this work is the requirement of G.T. camera intrinsic.\n2. The performance on RealEstate-10K seems to be SOTA, however, the performance on ACID is not. Does it mean such a method does not generalize well to the outdoor scenes?\n3. So I expect more comparisons on DL3DV or more public datasets (like DAVIS[1], iPhone[2]) to prove the effectiveness of the proposed method. \n\n\n[1] Pont-Tuset, Jordi, Federico Perazzi, Sergi Caelles, Pablo Arbeláez, Alex Sorkine-Hornung, and Luc Van Gool. 
\"The 2017 davis challenge on video object segmentation.\" arXiv preprint arXiv:1704.00675 (2017).\n\n[2] Gao, Hang, Ruilong Li, Shubham Tulsiani, Bryan Russell, and Angjoo Kanazawa. \"Monocular dynamic view synthesis: A reality check.\" Advances in Neural Information Processing Systems 35 (2022): 33768-33780." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024pfplat,\ntitle={{PF}3plat: Pose-Free Feed-Forward 3D Gaussian Splatting},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3QinqLlMCj},\nnote={under review}\n}" }, "abstract": { "value": "We consider the problem of novel view synthesis from unposed images in a single feed-forward. Our framework capitalizes on fast speed, scalability, and high-quality 3D reconstruction and view synthesis capabilities of 3DGS, where we further extend it to offer a practical solution that relaxes common assumptions such as dense image views, accurate camera poses, and substantial image overlaps. We achieve this through identifying and addressing unique challenges arising from the use of pixel-aligned 3DGS: misaligned 3D Gaussians across different views induce noisy or sparse gradients that destabilize training and hinder convergence, especially when above assumptions are not met. To mitigate this, we employ pre-trained monocular depth estimation and visual correspondence models to achieve coarse alignments of 3D Gaussians. We then introduce lightweight, learnable modules to refine depth and pose estimates from the coarse alignments, improving the quality of 3D reconstruction and novel view synthesis. Furthermore, the refined estimates are leveraged to estimate geometry confidence scores, which assess the reliability of 3D Gaussian centers and condition the prediction of Gaussian parameters accordingly. Extensive evaluations on large-scale real-world datasets demonstrate that PF3plat sets a new state-of-the-art across all benchmarks, supported by comprehensive ablation studies validating our design choices. We will make the code and weights publicly available." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Generalized Pose-Free Novel View Synthesis", "3D Reconstruction" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/b7406349ba7326c0cf44fbd77853219a2866edf6.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. 
To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/42c765b938bab59bb732dea31f21941cf93c66d4.zip" }, "title": { "value": "PF3plat: Pose-Free Feed-Forward 3D Gaussian Splatting" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3R9hsn1wAS
MolStitch: Offline Multi-Objective Molecular Optimization with Molecular Stitching
main
Active
molecular optimization;offline optimization;drug discovery
applications to physical sciences (physics, chemistry, biology, etc.)
3;3;5;6;6
4;4;4;4;4
1;2;3;3;3
2;2;3;3;3
1;3;4;3;3
4.6
4
2.4
2.6
2.8
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "The comparisons in Table 1 and 2 focus on the hypervolume; it is not clear how the multi-objective nature of the task is being considered here. The potential contributions of MolStitch related to its generative approach is distinct from its potential contributions related to sampling diverse scalarization weights. How is scalarization handled for the baseline methods included in this comparison? \n\nMolStitch is not a method that needs to be applied in a multi-objective context, fundamentally. Have benchmarks been performed on single objective tasks or multi-objective tasks with fixed scalarization weights using performance metrics other than HV? For example, the same top-10 AUC in PMO where some tasks were derived?\n\n(My questions and stated weaknesses are an attempt to clarify the contributions made by this work; there are many combinations of modelling choices that are possible, and despite the inclusion of ablations, my impression is that the rank-based proxy is the largest contributor to performance. Taking this component and integrating it with other modelling choices would help verify or refute this. I acknowledge the empirical results are very strong.)" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Both the fully offline and semi-offline optimization settings described in the introduction are of high importance to practitioners.\n\nA large number of methods are included in the comparison, including both “standard” molecular optimization approaches and recently developed approaches for offline learning developed outside of the molecular context. Reported empirical results are strong in terms of mean performance, even if baseline methods may be within one standard deviation.\n\nThe adaptation of the generative model’s loss function from the traditional regression formulation to a DPO-like loss function appears to be novel in this context of molecular design.\n\nThe Appendices are very detailed in explaining related work, the problem setting, anticipated strengths and weaknesses of each approach, and detailing the experiments performed. They will be very educational for readers." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work introduces MolStitch as an approach to molecular design that uses a fixed offline dataset to design “stitched molecules”; this is in contrast to the more common iterative molecular optimization approaches that are able to query an oracle. The approach is inspired from trajectory stitching in offline RL. Generated molecules are scored by a proxy model trained to perform pairwise ranking of molecules’ optimalities defined by a scalarized property score. 
Scalarization weights are sampled from a Dirichlet distribution to achieve diversity along a Pareto front." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The method that seems central to MolStitch---to stich two molecules together and generate a new structure that combines substructures of each---is equivalent to a standard crossover operation in molecular optimization. Indeed, the authors pretrain their model using the rule-based crossover from Jensen’s GraphGA/GraphMCTS. A comparison to a GraphGA employing crossover operations and otherwise using the same ranking proxy model (e.g., for binary tournament selection) is not included.\n\nHowever, the examples in Figure 17 suggest that the generative model, when proposing newly “stitched” molecules, pays very little attention to the two parent molecules. For me, this calls into question the entire premise of “stitching” as opposed to direct optimization of a generative model given a ranking proxy model. Even if the generative model is pretrained on the results of crossover operations, the stitched molecules here have extraordinarily little resemblance to the parent molecules.\n\nThe ablation in Table 3 suggests that the rank-based proxy is critical to performance, yet the other ablations and comparisons seem to lack evaluations using the rank-based proxy in combination with other generative methods besides REINVENT, including a GraphGA as mentioned earlier. Overall, the ablations still leave a murky impression of what aspects of MolStitch represent the most significant improvement over prior work. \n\nRelatively simple concepts for a venue such as ICLR are explained in unnecessary detail. For example, the form of a Dirichlet distribution, a 2-norm regression loss, Pareto optimality, pairwise ranking in Equation 13, and autoregressive token generation. Generic inequality and equality constraints in Equation 1 do not seem to serve a role in the example applications.\n\n[not a score-driving weakness] Proxy model training focuses on pairwise ranking and is initially trained on unlabeled molecules under the assumption that high structural similarity (above a threshold $\\delta$) implies that the objective scores of an unlabeled molecule should match that of the original molecule. Imposing this assumption can be accomplished through means besides pretraining (e.g., use of a Tanimoto kernel in a GP proxy model trained on the original regression task, or simple data augmentation for any proxy model). There is not specific justification for this particular approach, but the empirical results are strong.\n\n[not a score-driving weakness] There is no Related Work section in the *main text* of the manuscript. \n\n[not a score-driving weakness] As a minor point regarding Appendix B.2, Bayesian optimization and scalarization are not mutually exclusive." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "N.A." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This paper's motivation, that is, the problem it aims to solve is indeed a valuable problem in the MOMO scene.\n\n2. The article is clearly written to explain the method, and I can easily understand how each module works.\n\n3. The authors have considered the multi-objective challenge in MOMO. In fact, I agree that this is not an issue that should be ignored because there are too many phenomena in molecular properties that cannot be balanced due to natural conflicts. The existing MOMO methods can just complete this task, but does not consider or solve the problems brought about by multiple objectives." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a framework called Molecular Stitching to solve the problem of excessive reliance on oracle in existing molecular optimization scenarios. The author proposed the offline MOMO setting and a novel method, so that there is no need to call any oracle for molecular screening during the molecular generation (optimization) process. Experiments have demonstrated the effectiveness of the proposed method in multi-objective optimization." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper is not very well written. For example, especially in the introduction, I understand the importance of reducing the number of oracle queries, but what is the relationship with the performance of the proxy model that is introduced at great length? In other words, why does low proxy accuracy increase the number of oracle queries? In fact, online oracle calls are not absolutely related to the performance of the proxy model. For example, in DST[1], the proxy model is sufficient to support effective functional group editing. However, due to the limitations of its formulation, DST still needs an oracle to screen and obtain the optimal and more detailed connections between atoms. \n\n2. Motivation is good, but the proposed method does not fit well with motivation. I personally think that the number of oracle queries should take into account the labeled dataset and online generation. Some methods don't need the former, and some don't need the latter. The two parts should not be treated too differently. For example, if a method requires rdkit's online query but it can achieve effective MOMO, why not? If the author can show that the proposed method has an absolute advantage in the evaluation of the sum of the oracle query times of the two parts, then I agree. If plan to do this, please consider adding the oracle query times of the two parts of all baselines. Also, please note that LigGPT[2] also does not need to call oracle in the latter (please correct me if I am wrong).\n\n3. I am very grateful that the author pays attention to the multi-objective problem in MOMO, which is ignored by most baselines. This is because multi-objective optimization itself is a huge challenge. MOMO needs to consider not only the optimization itself, but also how to balance multiple objectives. 
Although this paper adopts the Pareto solution, this problem has not been analyzed in detail, which is regrettable for the MOMO field. Frankly speaking, this is not a factor I consider when scoring, but I really hope that the author will add some necessary analysis. For example, do the molecules have natural property conflicts? And will MOMO cause gradient conflicts during the optimization process (for specific implementation methods, please refer to [3]). Of course, has the property conflict problem been resolved before and after Pareto was adopted?\n\n4. About experiments. In my opinion, the authors unnecessarily restricted their experiments to offline optimization baselines, as they are not commonly found in MOMO tasks. For example, the ICT method does not seem to be designed for molecules. I suggest the authors add all the baselines mentioned in the DST paper and report the respective numbers of oracle calls during two stages required for all baselines. I would like to see whether the method in this paper has a clear advantage in the number of queries in both stages. Authors should consider reporting average property score (APS), which is a commonly used metric in the MOMO community.\n\n\n[1] Differentiable Scaffolding Tree for Molecular Optimization. ICLR, 2022.\n\n[2] LigGPT: Molecular generation using a transformer-decoder model. 2021.\n\n[3] Pareto Deep Long-Tailed Recognition: A Conflict-Averse Solution. ICLR, 2024." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. how the molecules in the initial training set are clipped, and by what means are the clipped sites determined.\n2. the fine-tuning model uses newly generated molecules, so must the properties of the newly generated molecules be better than the previous ones, and how much noise exists in them if they are still passed through the predictor?\n3. the authors are deciding the dataset for fine-tuning by ranking, so how accurate is this ranking model and does it have the ability to generalize?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The authors propose that different molecules possess different properties that the differences in properties depend on the structure or functional groups of the parts, and that new molecules with a full range of properties can be obtained when the 2-part structure is spliced. The motivation is clear and novel, and it is interesting to apply it in the direction of direct property optimization. The overall logic of the article is clearer." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose an offline molecular multiobjective optimization algorithm, MolStitch, which can be made independent of querying oracle function by means of Direct property optimization. 
It is proposed that the properties of a molecule depend on its partial structures, and that a molecule combining multiple properties can be obtained when these property-bearing substructures are spliced together." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The theory proposed by the authors is also somewhat problematic because the properties of a molecule can be determined by more than just a particular section of the structure or a functional group: for example, atoms may determine acidity and alkalinity, functional groups determine hydrophobicity, and the overall structure in turn determines properties such as boiling point. The authors should explain clearly how the functionality or structure is clipped between different molecules; if this is still done by querying the oracle function, how accurate and generalizable is the predictor used for these queries, and does it need a different predictor for different properties? This needs to be further discussed." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See Weaknesses" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The proposed MolStitch only uses existing offline data and does not need to query the oracle function.\n- The overall framework of MolStitch is novel. The figure also presents the framework clearly.\n- MolStitch outperforms baselines on several benchmarks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies an important problem, multi-objective molecular optimization, and mainly focuses on the offline setting. The authors propose Molecular Stitching (MolStitch), which leverages existing molecules from the offline dataset to generate stitched molecules and uses these generated samples to fine-tune generative models. Experimental results on various offline multi-objective molecular optimization problems validate the effectiveness of MolStitch." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- For fine-tuning the StitchNet, the new stitched molecule is not guaranteed to keep the desired properties.\n- Since the StitchNet and the generative model are pre-trained on the large-scale ZINC dataset, is it unfair to compare to other models that are not pre-trained? Is it possible to choose several baselines and use the same backbone network for comparison?\n- Related work should be included in the main text. While the space is limited, it is important to keep this part. Some important multi-objective references are also missing, for example, RL-based [1] and GFlowNet-based [2]. I recommend the authors add a brief related work section in the main text and move some less important descriptions to the appendix." 
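Several of the questions above concern how accurate and generalizable the ranking/proxy model is. For reference, a generic pairwise ranking objective of the Bradley-Terry form, which is the style of loss the rank-based proxy is described as using, looks like the sketch below; it is an illustration only, not the authors' code, and the toy scores are made up.

```python
import numpy as np

def pairwise_ranking_loss(score_better, score_worse):
    """Bradley-Terry style loss: push s(x_better) above s(x_worse).

    Both arguments are arrays of proxy scores for pairs ordered by a
    scalarized property value; the loss is -log sigmoid(s_b - s_w).
    """
    diff = score_better - score_worse
    return np.mean(np.log1p(np.exp(-diff)))

# toy usage: a proxy that already ranks pairs correctly has low loss
good = np.array([2.0, 1.5, 0.8])
bad = np.array([0.5, 0.2, -0.3])
print(pairwise_ranking_loss(good, bad))   # small
print(pairwise_ranking_loss(bad, good))   # large
```

Training on such pairwise comparisons only requires the relative order of scalarized scores, which is why ranking accuracy and generalization are the natural things to probe.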
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- Docking Score as a Property (Line 353): Could you clarify why docking score optimization is presented as a separate task from molecular property optimization? Isn’t docking score simply another molecular property? Further explanation would clarify the differences between the objectives of each task.\n\n- Offline Use of Molecular Optimization Methods: How were molecular optimization methods such as REINVENT adapted for offline settings? Some of these methods typically operate in online or iterative contexts. Additional details on any modifications would help assess the effectiveness of these methods in the offline scenario.\n\n- Details on REINVENT-BO: The method REINVENT-BO doesn’t seem to align with the reference for Austin et al. Could you provide more information on this method and clarify the implementation used in this paper?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "- Significance of Offline Molecular Optimization: Offline optimization is relatively less explored in the field of molecular generation, which is dominated by online oracle-based approaches. Offline optimization presents a fundamentally more challenging problem because it requires generating high-quality molecules from a static dataset without iterative property evaluation, which is especially relevant for applications like drug discovery where experimental validation is costly.\n- Performance on Reported Metrics: The experimental results demonstrate that MolStitch achieves superior performance compared to existing methods in multi-objective molecular optimization. The rank-based proxy, StitchNet’s design for combining molecular fragments, and priority sampling collectively allow MolStitch to explore a broader space of candidate molecules, as evidenced by the results on standard metrics such as hypervolume and R2 indicator." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a framework, MolStitch, for generating molecules with desirable properties using only offline datasets. MolStitch works by “stitching” fragments from existing molecules in an offline dataset to create new molecules that combine multiple desired properties in a single structure. This approach leverages StitchNet, a neural network trained to produce valid molecular combinations, alongside a rank-based proxy model that assesses molecule quality through pairwise comparisons, which is claimed to enhance stability in multi-objective tasks. Experimental results indicate that MolStitch consistently outperforms existing methods in various offline molecular optimization benchmarks." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Novelty and First Contribution Claim: The claimed novelty of molecular “stitching” may be overstated. Similar approaches, such as the methods presented by Jin et al. [1] and Guo et al. [2], also leverage substructure-based techniques for molecule generation, where certain fragments are prioritized for their property contributions and tested in multi-objective settings. The authors should review the relevant work more closely and moderate their claims on being the first to introduce this method.\n\n- Choice of Baselines and Comparison Gaps: The separation of molecular optimization and model-based optimization (MBO) methods might be artificial, as molecular optimization techniques can often be integrated within MBO frameworks. This potential integration creates strong baselines that are essential for a comprehensive evaluation. Including such baselines would make the comparison more robust and clarify MolStitch’s performance relative to an enhanced baseline that combines MBO with molecular optimization approaches.\n\n- Clarity and Completeness: The paper lacks some necessary details, particularly about the experiment setup and baseline methods. Greater clarity on the exact experiment design and further elaboration on baseline choices would make the methods and results more reproducible and transparent for readers.\n\n### Reference\n[1] Jin, Wengong, Regina Barzilay, and Tommi Jaakkola. \"Multi-objective molecule generation using interpretable substructures.\" International conference on machine learning. PMLR, 2020.\n[2] Guo, Minghao, et al. \"Data-efficient graph grammar learning for molecular generation.\" arXiv preprint arXiv:2203.08031 (2022)." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "A novel framework that stitches molecules from an offline dataset to fine-tune the generative model." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024molstitch,\ntitle={MolStitch: Offline Multi-Objective Molecular Optimization with Molecular Stitching},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3R9hsn1wAS},\nnote={under review}\n}" }, "abstract": { "value": "Molecular discovery is essential for advancing various scientific fields by generating novel molecules with desirable properties. This process is naturally a multi-objective optimization problem, as it must balance multiple molecular properties simultaneously. Although numerous methods have been developed to address this problem, most rely on online settings that repeatedly evaluate candidate molecules through oracle queries. However, in practical applications, online settings may not be feasible due to the extensive time and resources required for each oracle query. To fill this gap, we propose the Molecular Stitching (MolStitch) framework, which utilizes a fixed offline dataset to explore and optimize molecules without the need for repeated oracle queries. Specifically, MolStitch leverages existing molecules from the offline dataset to generate novel `stitched molecules' that combine their desirable properties. These stitched molecules are then used as training samples to fine-tune the generative model, enhancing its ability to produce superior molecules beyond those in the offline dataset. 
Experimental results on various offline multi-objective molecular optimization problems validate the effectiveness of MolStitch. MolStitch has been thoroughly analyzed, and its source code is available online." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "molecular optimization", "offline optimization", "drug discovery" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/b68c628944b345b2560bc3f2701984ff3c41faf9.pdf" }, "presentation": null, "primary_area": { "value": "applications to physical sciences (physics, chemistry, biology, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "MolStitch: Offline Multi-Objective Molecular Optimization with Molecular Stitching" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
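Editorial note on the MolStitch record above: the reviews repeatedly refer to its "rank-based proxy model that assesses molecule quality through pairwise comparisons." The minimal sketch below shows what such a pairwise (Bradley-Terry-style) proxy could look like. It is an illustration under assumptions, not the authors' implementation; `ProxyNet`, the fingerprint inputs, and the preference labels are hypothetical placeholders.

```python
# Illustrative sketch (not the authors' code): a pairwise rank-based proxy in the
# spirit of "assessing molecule quality through pairwise comparisons".
import torch
import torch.nn as nn

class ProxyNet(nn.Module):
    """Scores a molecular feature vector (e.g., a fingerprint) with a scalar."""
    def __init__(self, in_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

def pairwise_rank_loss(scores_a, scores_b, a_better):
    """Bradley-Terry style loss: molecule A should outscore molecule B whenever
    the offline labels say A dominates B (a_better = 1)."""
    logits = scores_a - scores_b
    return nn.functional.binary_cross_entropy_with_logits(logits, a_better.float())

# Toy usage with random stand-in data.
proxy = ProxyNet(in_dim=2048)
opt = torch.optim.Adam(proxy.parameters(), lr=1e-4)
xa, xb = torch.rand(32, 2048), torch.rand(32, 2048)  # fingerprints of paired molecules
a_better = (torch.rand(32) > 0.5)                     # offline pairwise preference labels
opt.zero_grad()
loss = pairwise_rank_loss(proxy(xa), proxy(xb), a_better)
loss.backward()
opt.step()
```

In an offline setting, the preference labels would come from property values already present in the fixed dataset rather than from new oracle queries, which is the point of contention in the reviews above.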
3RLxccFPHz
An Intelligent Agentic System for Complex Image Restoration Problems
main
Active
image restoration;low-level vision;agent;large language model;vision language model
applications to computer vision, audio, language, and other modalities
5;6;6;8
4;4;5;3
2;3;4;3
1;3;3;3
3;3;4;3
6.25
4
3
2.5
3.25
-0.648886
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. How are the discovered workflows similar to the original motivation of following the human workflow? For instance, is the subtask scheduling by GPT-4 (w. experience) match the best practices performed by a human? It would be better if the authors could provide more insights or discussions.\n\n2. How does the proposed model perform when there is only a single type of degradation? Does it also perform competitively?\n\n3. What is the criterion for deciding whether Execution step is Failed or Successful? (I might have missed)" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. Clear presentation of the benefits of the proposed methodologies. I especially liked Figure 3, 6, 7, and 8, where the authors show dramatic improvements on why choosing a good scheduler for the image restoration components is important, as well as reflection and rollback.\n\n2. Novelty in connecting the human process of IR with LLM-based agentic systems. Though the idea of mimicking the human workflow is being widely adopted more recently, application to image restoration tasks and showing effectiveness is not demonstrated before, to my knowledge.\n\n3. Thorough justification of the design choices and careful experiment designs. Reasons for the proposed workflow and the capabilities the authors are trying to give to the LLM/VLMs are well described, and the evaluations seem to be fairly performed." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents an agentic workflow based on LLM/VLMs for image restoration. The agentic system follows how actual humans would process images, consisting of five stages: Perception, Scheduling, Execution, Reflection, and Rescheduling. Since the existing VLMs are not sufficiently capable of analyzing image quality or reasoning about the order of image processing tasks, the VLMs are finetuned and allowed for (self-)exploration to understand the effects of scheduling the image restoration components. Experimental results clearly demonstrate the effects of scheduling with learned experience and the other proposed components." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. No cost analysis. Using such agentic systems require numerous requests to the LLM/VLM APIs; if the system chooses to perform \"Reflection\" with the tree search, the worst case scenario would be extremely costly. Compared to the existing image restoration models, the proposed model uses significantly more compute. In this sense, given that many previous works (roughly) match the FLOPs when comparing the restoration quality, one might argue that the comparisons are unfair.\n\n2. 
Relatively subtle improvements for quantitative metrics (though qualitative improvements look quite significant). I would suggest also adding quantitative measures on the figures so that the readers can compare both aspects with a single glance." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "NA" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The proposed system offers a comprehensive approach to image restoration, addressing a broad range of degradation types through a structured, adaptable methodology.\n2. It incorporates human-interaction-inspired insights into the image restoration process, potentially enhancing adaptability and effectiveness in handling complex restoration tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The proposed AgenticIR system addresses the inherent complexity of real-world image restoration (IR) tasks by emulating a human-like, multi-stage processing workflow. The system operates through distinct phases: Perception, Scheduling, Execution, Reflection, and Rescheduling. It integrates Large Language Models (LLMs) and Vision Language Models (VLMs) into a collaborative framework, allowing text-based interactions to direct the application of specialized IR models. This agentic approach builds on existing multi-modal IR frameworks by dynamically adapting its restoration strategy, actively reassessing and adjusting to handle various complex degradations." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Comparison fairness**: The comparative experiments appear to lack fairness, as the baselines (e.g., InstructIR) are designed as unified IR models trained to handle multiple degradation types in a single framework. In contrast, AgenticIR leverages specialized off-the-shelf models for each type of degradation. Therefore, it would be more appropriate to compare AgenticIR to the state-of-the-art models for each specific degradation task rather than to unified restoration models.\n \n2. **Efficiency Concerns**: While the system is comprehensive, its workflow is notably complex and lengthy. Compared to regular image restoration models, how efficient is AgenticIR in processing images? This is a critical aspect for real-world applications of image restoration and should be addressed with precise comparative evaluations.\n\n3. **Toolbox Ablation Study**: Lines 159-160 state, \"For each degradation, we collect three to six advanced models to build the ‘toolbox’ that the intelligent agent can use.\" There is no ablation study analyzing the impact of selecting these advanced models on the system’s effectiveness. 
Understanding the influence of each selected model in the toolbox could provide valuable insights.\n\n4. **GPT-4 Usage and Reproducibility**: AgenticIR uses GPT-4, but GPT-4 lacks a fixed version, which raises concerns about the reproducibility of experimental results. Additionally, there is no ablation study on the effects of using alternative LLMs, particularly open-source options, on performance outcomes." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see the Weaknesses." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "-\tThis paper is well-written and easy to follow.\n-\tThe experimental setup is comprehensive, with sufficient ablation and comparative experiments demonstrating the effectiveness of their proposed methods\n-\tThe discovery that execution order is key to restoring complex degraded images is compelling." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "To address the complex image restoration(IR) problems in real-world scenarios, the authors propose AgenticIR, which is an agent-based system that comprises five stages: perception, scheduling, execution, reflection, and rescheduling. The system leverages a Vision-Language Model (VLM) and a Large Language Model (LLM) to reason and connect these five stages, utilizing an IR toolbox during the execution phase to perform actual image reconstruction.\n\nThe three main elements in this process are the LLM, VLM, and the IR toolbox. The VLM primarily analyzes the image quality and summarizes the degradation issues, fine-tuning its capabilities based on existing work. The LLM is responsible for planning and strategizing based on the VLM's results, utilizing GPT-4 and employing a self-exploration method to gain experience with IR problems. The IR toolbox consists of a combination of 3-6 existing models tailored to each type of image degradation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "-\tThe main concern is that this work resembles several widely-used frameworks (e.g., large language models (LLMs), vision-language models (VLMs) and image restoration models), giving it a predominantly engineering-focused approach. \n-\tAdditionally, this work appears complex, so providing statistics on the time and complexity involved in a single inference would enhance clarity.\n-\tGiven that LLMs and VLMs often struggle with the issue of 'hallucination,' does this work encounter a similar challenge? If so, how does it address this problem?\n-\tWhat are the limitations of the proposed framework?\n-\tRegarding the point that 'execution order is crucial', are the documents (knowledge) remain consistent during inference across different test samples?" 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "In addition to the weaknesses mentioned, I have two further questions:\n\n1. Why Not Use VLMs Exclusively Throughout the Pipeline? Given VLMs' strong capabilities in image quality assessment and reasoning, could a VLM-only approach be more efficient or effective for the entire pipeline?\n2. Would Online Updates to the Reference Data Benefit the Pipeline? Could implementing real-time updates to the experiential knowledge base further enhance the pipeline’s adaptability and performance?\n\nI hope the authors can make up for the weaknesses mentioned and address these questions." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Human-Centric Design: AgenticIR provides an image restoration approach that mirrors human actions, incorporating processes like reflection and iterative rescheduling into its pipeline. This design enhances action interpretability and facilitates meaningful human interaction with the system.\n2. Clear and Concise Expression: The paper presents complex ideas with clarity, accompanied by detailed images and diagrams that enhance comprehension and support the technical explanations.\n3. Comprehensive Experiments and Ablation Studies: Thorough experimental evaluations are provided, with well-structured ablation studies for each module. This approach effectively validates the system's design and performance.\n4. Illustrative Pipeline Examples: The pipeline is illustrated with specific cases, offering a clear understanding of how each component functions within real-world scenarios." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces AgenticIR, an intelligent system designed to handle complex image restoration tasks by emulating human-like problem-solving methods. The system operates through five stages: Perception, Scheduling, Execution, Reflection, and Rescheduling. AgenticIR leverages LLM and VLM, using their text generation capabilities to operate a set of IR tools dynamically. It relies on VLMs for image quality assessment and LLMs for step-by-step reasoning, enhancing its adaptability to various IR challenges.\n\nIt also incorporates a self-exploration mechanism that generates referenceable summaries from past restoration attempts, which improves its decision-making. Experimental results show AgenticIR’s effectiveness in complex restoration scenarios, highlighting its potential for real-world automated image processing and broader AI applications in visual processing." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
Limitations Compared to Optimal Solutions: AgenticIR tends to encounter an early stopping issue, where it may settle on a satisfactory solution prematurely, halting further exploration and potentially missing the optimal outcome. Addressing this limitation is important, and it might be beneficial to add an additional row to Table 4 to reflect this aspect.\n2. Insufficient Reporting on Iteration Count and Processing Time: Although the paper emphasizes the role of experiential information and provides illustrative examples, it lacks concrete data on the actual reduction in iterations or time consumption. Including specific metrics on these improvements would strengthen the evaluation of AgenticIR’s efficiency and practical advantages." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "This paper proposes a LLM-based agentic system that utilize various tools for complex image restoration problems." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024an,\ntitle={An Intelligent Agentic System for Complex Image Restoration Problems},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3RLxccFPHz},\nnote={under review}\n}" }, "abstract": { "value": "Real-world image restoration (IR) is inherently complex and often requires combining multiple specialized models to address diverse degradations. Inspired by human problem-solving, we propose AgenticIR, an agentic system that mimics the human approach to image processing by following five key stages: Perception, Scheduling, Execution, Reflection, and Rescheduling. AgenticIR leverages large language models (LLMs) and vision-language models (VLMs) that interact via text generation to dynamically operate a toolbox of IR models. We fine-tune VLMs for image quality analysis and employ LLMs for reasoning, guiding the system step by step. To compensate for LLMs' lack of specific IR knowledge and experience, we introduce a self-exploration method, allowing the LLM to observe and summarize restoration results into referenceable documents. Experiments demonstrate AgenticIR's potential in handling complex IR tasks, representing a promising path toward achieving general intelligence in visual processing." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "image resotration", "low-level vision", "agent", "large language model", "vision language model" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/82158759b58413f37165e088bf77571ff6c2005c.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. 
If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "An Intelligent Agentic System for Complex Image Restoration Problems" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
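Editorial note on the AgenticIR record above: the reviews describe its Perception, Scheduling, Execution, Reflection, and Rescheduling loop only in prose, so here is a rough control-flow sketch of how such an agentic restoration loop could be organized. It is an assumed illustration, not the paper's code: `vlm_describe_degradations`, `llm_plan`, `vlm_judge`, and `toolbox` are hypothetical stand-ins for the fine-tuned VLM, the LLM planner, the VLM-based reflection check, and the set of specialized IR models.

```python
# Assumed sketch of a five-stage agentic restoration loop; all helpers are
# passed in as stubs, so nothing here is tied to a specific LLM/VLM API.
def restore(image, toolbox, vlm_describe_degradations, llm_plan, vlm_judge,
            max_rounds: int = 3):
    degradations = vlm_describe_degradations(image)       # Perception
    plan = llm_plan(degradations, history=[])             # Scheduling
    history = []
    for _ in range(max_rounds):
        for subtask in plan:
            candidate = toolbox[subtask](image)           # Execution
            ok, feedback = vlm_judge(candidate, subtask)  # Reflection
            if ok:
                image = candidate
            else:
                history.append((subtask, feedback))
                plan = llm_plan(degradations, history)    # Rescheduling
                break
        else:
            return image                                  # all subtasks succeeded
    return image
```

Such a loop also makes the reviewers' cost concern concrete: every reflection or rescheduling step is an extra model query, so the worst case grows with `max_rounds` times the plan length.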
3RSLW9YSgk
Dream to Manipulate: Compositional World Models Empowering Robot Imitation Learning with Imagination
main
Active
World model; Imagination; Imitation Learning; Gaussian Splatting; Compositional; Physics-informed; Object-centric;
applications to robotics, autonomy, planning
5;6;8
3;4;4
2;3;3
2;3;3
2;4;2
6.333333
3.666667
2.666667
2.666667
2.666667
0.755929
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Figure 2: It would be helpful to include how “open-vocabulary tracking models” fit into the pipeline. \n\nL107: “Two recent works demonstrated predicting future states can be applied to robotics” - This statement can be made more precise, since predicting future states / modeling forward dynamics for control is not a new idea in robotics.\n\nL128: Why is (Cheng et al., 2023) used as a reference for the zero-shot capabilities of foundation models?\n\nL157: There is some overloading of the term “manipulation”, which in the context of the paper seems to refer to a “manipulable” or controllable world model, rather than a world model explicitly designed for robot manipulation tasks. \n\nL174: Set $\\mathcal{X}$ notation is used for a sequence.\n\nL267-268: Other objects in the world that the robot arm may interact with could also be articulated? ie. cabinets \n\nL409: Is the PerAct baseline in the multi-task setting only trained on the subset of RLBench tasks you’re evaluating on? How many demonstrations were used? Was PerAct trained with data augmentation?\n\nL412-413: Was this model selection using a validation set done over the entire course of training? How does this compare to just using the final model after training for 600k steps?\n\nL418: Why “112, 61, and 81” demonstrations for the three tasks? How were these number of “imagined” demonstrations chosen? \n\nL485: How were OOD test conditions chosen? Would they be within the set of equivariant transformations evaluated in Table 2? \n\n[minor editing]\n\nFigure 2: part 3, typo in Gaussian\n\nL105: “gaussian” -> “Gaussian”\n\nL233: $x_{n,k}$ should be $y$?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper tackles the data efficient regime, and their proposed approach uses a single demonstration\n\n- The paper validates the proposed approach with real-world experiments, and demonstrates a system that can perform manipulation tasks" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes DreMa, which integrates object-centric Gaussian splatting with a rigid-body physics simulator, to “imagine” new demonstrations to train imitation learning models. These imagined demonstrations are obtained in simulation by applying robot transformations and rotations to objects extracted from Gaussian splatting." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My main issue with the paper is that the work is positioned within the “world model” literature, while the proposed method seems to fall under real2sim and data augmentation strategies. 
The “world model” is used to generate an augmented set of demonstrations in simulation in order to train an imitation learning model offline, and it is not used during online control. The set of equivariant transformations used to generate the augmented demo set is hand-designed, and likely task-specific. \n\nThe simulated results would be more convincing if they were expanded to include more than 3 RLBench tasks, but the method is limited to non-articulated objects. Additionally, the real-world tasks seem to only include blocks or boxes. How does DreMa perform with more complex shapes? How does DreMa perform when the scene includes a mix of objects with different shapes? \n\n- It is unclear what base imitation learning model is used by the proposed DreMa method. Is this just PerAct? Are observations to the policy directly captured from cameras, or by rendering Gaussian splats?\n\n- There are no metrics on runtime performance of the proposed method, while the introduction mentions the “real-time” performance of Gaussian splatting. \n\n- Table 1: Comparisons to PerAct (Shridhar et al., 2023) are done using only 5 episodes per task, while PerAct uses 25 episodes per task. \n\n- The related work section would benefit from a broader discussion of particle-based simulation approaches, such as: https://arxiv.org/abs/1810.01566, https://arxiv.org/abs/2312.05359, https://arxiv.org/abs/2405.01044, https://arxiv.org/abs/2303.05512. A comparison to ManiGaussian (https://arxiv.org/abs/2403.08321) would also be helpful." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. What’s the relationship with digital twin line of work?\n2. How to learn 3d models for parts of articulated objects and how to imagine demonstrations that manipulate articulated objects?\n3. Can you replay imagined trajectories in simulator to verify correctness? Will that help improve imitation success?\n4. How does error build up in the pipeline of segmenting object masks -> learning objects models through Gaussian Spatting -> imagine demonstrations -> learning policy? Perhaps some ablations or quantitative metrics to measure error will be useful! \n5. Can you add one more baseline method of data augmentation to compare to?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper presents a novel strategy of data augmentation by first acquiring object models and then leveraging simulations to ensure correct dynamics of imagined demonstrations instead of learning worlds models of both objects and dynamics simultaneously. The strategy is shown to have meaningful improvement on few-shot imitation performance in sufficient sim and real tasks. 
The approach is also thoroughly investigated with ablations showing the significance of imagining new demonstrations via roto-translation of the original demos. The paper is well written with clear motivations and goals and sufficient results to support the claim." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper augments a small set of demonstrations with imagination to improve few-shot imitation learning in both RLBench and real-world robotic tasks. The imagination comes from learning compositional object models through Gaussian Splatting and replaying demonstrations with varied object poses in a simulator." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper should compare to other data augmentation approaches such as MimicGen or Digital Twin, which is a significant and extensive line of work worthy of more elaboration and discussion. Similar approaches such as [1] should be discussed or cited. \n2. The method relies on open-vocabulary tracking models to segment objects, which limits the approach to non-articulated objects. It is unclear how such segmentation models can capture individual robot links or object parts connected with articulated joints accurately. Also, it is unclear how to extend the simulator to incorporate articulated-object interactions after the parts have been learned. \n3. The existing approach for verifying imagined demonstrations is rudimentary. \n\n[1] MIRA: Mental Imagery for Robotic Affordances" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Could you please show some performance results in more complex physical environments and challenging tasks? Even if they were unsuccessful, it would be helpful to see such results, even though they are not included in the paper." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper introduces an innovative approach by leveraging world models to generate robotic data for imitation learning, which is a contribution to the field.\n2. The experiments are detailed, covering both simulation environments and real-world robot demonstrations, providing a robust evaluation of the approach.\n3. A creative method for augmenting data used in imitation learning is introduced, which could lead to improved learning efficiency." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a novel paradigm for constructing world models that serve as explicit representations of real-world environments and their dynamics. By integrating advances in real-time photorealism, such as Gaussian Splatting, with physics simulators, the authors propose a system capable of generating new data for imitation learning. 
Additionally, the paper demonstrates the application of this model in real-world scenarios, showing how data collected from the world model can be used to train robots via imitation learning, with promising results when transferring learned behaviors to real-world tasks.\n\n**Strengths:**\n\n1. The paper introduces an innovative approach by leveraging world models to generate robotic data for imitation learning, which is a contribution to the field.\n2. The experiments are detailed, covering both simulation environments and real-world robot demonstrations, providing a robust evaluation of the approach.\n3. A creative method for augmenting data used in imitation learning is introduced, which could lead to improved learning efficiency.\n\n**Weaknesses:**\n\n1. The absence of publicly available source code limits the reproducibility of the results. It is suggested to release the code during the rebuttal stage.\n2. Some figures in the paper need improvement, as the text in several instances is too small to read clearly.\n3. The predictions demonstrated in the paper are limited to simple tasks and physics environments, and future work should focus on extending these predictions to more challenging tasks and complex physical simulations.\n\nIn conclusion, the paper presents a compelling framework that blends world modeling with imitation learning, but there are areas for improvement, particularly in terms of figure clarity, task complexity, and providing source code for reproducibility." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The absence of publicly available source code limits the reproducibility of the results. It is suggested to release the code during the rebuttal stage.\n2. Some figures in the paper need improvement, as the text in several instances is too small to read clearly.\n3. The predictions demonstrated in the paper are limited to simple tasks and physics environments, and future work should focus on extending these predictions to more challenging tasks and complex physical simulations." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024dream,\ntitle={Dream to Manipulate: Compositional World Models Empowering Robot Imitation Learning with Imagination},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3RSLW9YSgk},\nnote={under review}\n}" }, "abstract": { "value": "A world model provides an agent with a representation of its environment, enabling it to predict the causal consequences of its actions. Current world models typically cannot directly and explicitly imitate the actual environment in front of a robot, often resulting in unrealistic behaviors and hallucinations that make them unsuitable for real-world applications. In this paper, we introduce a new paradigm for constructing world models that are explicit representations of the real world and its dynamics. By integrating cutting-edge advances in real-time photorealism with Gaussian Splatting and physics simulators, we propose the first compositional manipulation world model, which we call DreMa. DreMa replicates the observed world and its dynamics, allowing it to imagine novel configurations of objects and predict the future consequences of robot actions. 
We leverage this capability to generate new data for imitation learning by applying equivariant\ntransformations to a small set of demonstrations. Our evaluations across various settings demonstrate significant improvements in both accuracy and robustness by incrementing actions and object distributions, reducing the data needed to learn a policy and improving the generalization of the agents. As a highlight, we show that a real Franka Emika Panda robot, powered by DreMa ’s imagination, can\nsuccessfully learn novel physical tasks from just a single example per task variation (one-shot policy learning). Our project page and source code can be found in: https://dreamtomanipulate.github.io/DreMa/." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "World model; Imagination; Imitation Learning; Gaussian Splatting; Compositional; Physics-informed; Object-centric;" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/cb7e2af1fdc15061ac3435efd41700901924ac10.pdf" }, "presentation": null, "primary_area": { "value": "applications to robotics, autonomy, planning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Dream to Manipulate: Compositional World Models Empowering Robot Imitation Learning with Imagination" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3RcztSIHiA
PDE-GAN for solving PDE optimal control problems more accurately and efficiently
main
Active
Optimal control;deep learning;PINNs;GANs
other topics in machine learning (i.e., none of the above)
3;3;3;5;5
5;3;4;4;4
2;2;1;3;2
2;2;2;2;3
3;1;1;1;2
3.8
4
2
2.2
1.6
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "In Algorithm 1, why do you limit the number of epochs to 500? \n\nIt seems that the algorithm updates the generator and discriminator together without any condition; why?\n\nHow do you properly set Bound1 and Bound2?\n\nTable 2 shows the running time for PDE-GAN; is this the total or the mean? Does it include the training time for the GAN?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The integration of PINNs into the GAN framework is a new approach for solving PDEOC problems. This allows the use of two additional\ndiscriminator networks to adaptively adjust the loss function, i.e., the weights between the different competing loss terms. Compared to Soft-PINNs and Hard-PINNs, PDE-GAN can find the optimal control without the need for cumbersome line search, offering a more flexible structure, higher efficiency, and greater accuracy." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a novel method, PDE-GAN, which integrates PINNs into the GAN framework to solve PDEOC problems. The authors address the limitations of traditional PINN approaches in balancing competing loss terms and reducing computational time, particularly by eliminating the need for exhaustive line search in weight tuning. They validate their method on four representative PDEOC problems, including linear and nonlinear PDEs and various types of control (boundary, spatio-temporal domain, and time-domain distributed equations), and compare it with soft-PINNs and hard-PINNs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper lacks a theoretical analysis explaining why integrating PINNs into a GAN framework results in improved performance. Theoretical insights or proofs would strengthen the paper; especially since no line search is used, more comprehensive evaluations of the results would be beneficial. However, using experimental results alone to demonstrate its advantages is the main weakness." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. Why do we need two generators and two discriminators, and what are their respective purposes?\n2. 
What unit is used in Table 2? That is, are results presented in wallclock time? Such information is relevant for anyone to make a fair comparison to this work.\n3. Have the authors considered comparing their algorithm to classical techniques (instead of only PINNs)?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper proposes a new method for PDE-constrained control problems and demonstrates its superiority to PINNs in several numerical experiments." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes to combine PDE constraints with generative adversarial training as a method to solve PDE optimal control problems. The method is a GAN-based analogue to PINNs and outperforms the latter in some numerical experiments." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper is far from well-written. First, it contains many grammatical and spelling errors that distract from the overall contribution. Beyond this, the writing (especially in Section 3) is unclear and it is difficult to understand the authors' reasoning.\n\nIn addition, this paper lacks any comparison to classical techniques for solving PDE-constrained optimal control problems. The proposed method is only compared to PINNs, but PINNs are not exactly state-of-the-art methods and can be quite easy to beat in many circumstances. As such, I am not convinced that PDE-GAN is the best method for solving these problems." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "* Why would the adversarial loss help for solving PDE optimal control problems ?\n* Have the authors tried other techniques to try balancing the two losses ?\n* Have the authors tried without the hard constraints ?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "* The method seems to work and obtains good experimental results on the different problems.\n* The overall running time is less than of the Soft-PINNs and Hard-PINNs baselines when linear search is taken into account." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces PDE-GAN a framework for solving PDEs optimal control problem with a PINN and an adversarial loss. The framework is an extension of the hard-constraint PINNs, which imposes constraints directly on the PINN solution instead of enforcing them through loss penalties. In the optimal control configuration, the pde and cost objectives are balanced by a weight $w$. 
Existing implementations require a search for the best $w$ to find a compromise between the two loss terms. The adversarial loss aims at mitigating the need for searching for the optimal weight value. The authors conducted experiments on the Laplace and Burgers equations with several control setups." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The motivations of the paper are not well-founded, in my view. The authors do not explain why the adversarial approach is needed to balance the different loss terms and never discuss nor test possible alternatives.\n* The paper has some writing issues and suffers from a lack of clarity. Sections 3.1 and 3.2 should be within a separate background section. The notations introduced in Section 3.3 are difficult to read, especially RHS and LHS, which are not explicitly detailed. I suggest using several examples to improve clarity.\n* The running time is greater than that of a single PINN. \n* The importance of linear search for the other methods is not explained properly.\n* The results are only marginally better than Hard-PINNs except for the second equation.\n* The authors do not discuss their architectural choices, especially the adversarial loss and the noise injection.\n* I suggest changing the name of the framework to Adversarial-PINNs as it is more faithful to the core idea of the paper." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Please refer to Weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Originality: The use of GANs under the framework of PINNs is interesting.\n- Significance: The problem tackled in this paper is inherently challenging due to the complexity of solving inverse problems under strict physical constraints. The authors’ approach demonstrates a promising direction in addressing these difficulties effectively." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a GAN-based approach to address the dual optimization problem for solving PDEs with an unknown control function. The method integrates PINN-like objective functions and loss structures, targeting both forward and inverse problems. In this setup, the generators are tasked with predicting both the control function and the corresponding solution function. Meanwhile, the discriminators are designed to differentiate between valid solutions and zero-valued outputs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Choice of the Discriminator:\n - The current approach computes discriminators in a point-by-point manner. 
However, in traditional settings with discrete images, the entire image is typically used as input instead of individual pixels. The authors should provide a clear rationale and experimental results for this design choice.\n\n- Lack of Comprehensive Baseline Comparison:\n - The paper lacks a comparative analysis with relevant methods such as bi-level optimization techniques. While these methods are mentioned in the related work, the absence of a thorough experimental comparison is not adequately justified.\n - Furthermore, there is no comparison with existing approaches like Physics-informed DeepONet (Wang et al., 2021), which address similar challenges. A direct comparison would help contextualize the proposed method’s performance in relation to established approaches exploring similar ideas.\n\n- Complexity of Addressed Problems:\n - The paper does not sufficiently communicate the complexity or importance of the problems it addresses, making it challenging for readers to assess the novelty and significance of the proposed solution.\n - For example, Mowlavi and Nabi (2022) explore a range of equations, from simpler Laplace problems to more complex 2D Navier-Stokes equations, in their study of PDE-based optimal control (PDEOC). Including results for similarly challenging equations in this work would strengthen the paper’s validation and impact.\n\n- Readability and Clarity:\n - The submission requires revisions to enhance readability and clearly communicate the main ideas. Key areas for improvement include:\n - Unifying the notations for the generator, solution function, and control function.\n - Organizing and presenting the definitions of different components in a clearer and more cohesive manner.\n\nReferences:\n- Mowlavi and Nabi. (2022). *Optimal control of PDEs using physics-informed neural networks.*\n- Wang et al. (2021). *Learning the solution operator of parametric partial differential equations with physics-informed DeepONets.*" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. The symbol of weight $w$ in line 97 is not consistent with equation (3). \n2. The main point of this work is to remove the hand-picking of the penalty parameter $w$. $w$ is just one hyperparameter, but the discriminator is a network, and it has a lot of hyperparameters, such as the depth and the width. Moreover, PDE-GAN (the proposed method in this manuscript) needs four discriminators. Tuning one hyperparameter is easier than many hyperparameters. \n3. Also, did you try to set $w$ to be learned? \n4. As I said above, this work only considers the equality constraints, but the inequality constraints are common in practice. How do you generalize the proposed approach to more complex constraints? \n5. The numerical experiments did not show the stability of PDE-GAN. Can you show the whole results (e.g., the loss during training) of $D_c$ and $D_u$ to demonstrate the stability of PDE-GAN?" 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This paper is not difficult to follow. The proposed method uses the PINN framework to solve the parametric optimal control problems, which can be used to solve high-dimensional problems. The training style is inspired by GAN. Based on such training style, the different terms in the loss can be balanced without tuning by hand." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The aim of this work is to use neural networks to solve PDE-constrained optimal control problems. The main contribution of this work is to introduce the GAN style to train the PINN to solve optimal control problems. The GAN style to train PINN is the previous work." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The PDE-constrained optimal control problems considered in this work only involve the equality constraint, but in practice, the inequality constraints are typical, e.g., the box constraint. \n\n2. As stated above, if there exist inequality constraints, the proposed method in this manuscript cannot be applied directly. There are some literature that have already resolved this issue, but this manuscript did not mention, e.g., P. Yin, G. Xiao, K. Tang, and C. Yang, AONN: An adjoint neural network method for all-at-once solutions of parametric optimal control problems, SIAM Journal on Scientific Computing (2024). In this literature, the authors handle more general parametric optimal control problems with complex constraints. Since the AONN inherits the scheme of direct adjoint looping, it does not require tuning the penalty parameter. At the very least, the author should discuss AONN in related work because its key point has a strong correlation with this manuscript. \n\n3. The experimental results are not so convincing. The loss behavior during training is not shown. Only the final error is reported. However, the training procedure of GAN is unstable. It is hard to say that the performance is better than the baseline." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Using a Generative Adversarial Network (GAN) to train a Physics-Informed Neural Network (PINN) to solve the problem of weight selection in PDE optimal control problems." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024pdegan,\ntitle={{PDE}-{GAN} for solving {PDE} optimal control problems more accurately and efficiently},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3RcztSIHiA},\nnote={under review}\n}" }, "abstract": { "value": "PDE optimal control (PDEOC) problems aim to optimize the performance of physical systems constrained by partial differential equations (PDEs) to achieve desired characteristics. Such problems frequently appear in scientific discoveries and are of huge engineering importance. Physics-informed neural networks (PINNs) are recently proposed to solve PDEOC problems, but it may fail to balance the different competing loss terms in such problems. Our work proposes PDE-GAN, a novel approach that puts PINNs in the framework of generative adversarial networks (GANs) “learn the loss function” to address the trade-off between the different competing loss terms effectively. 
We conducted detailed and comprehensive experiments to compare PDE-GANs with vanilla PINNs in solving four typical and representative PDEOC problems, namely, (1) boundary control on Laplace Equation, (2) time-dependent distributed control on Inviscous Burgers' Equation, (3) initial value control on Burgers' Equation with Viscosity, and (4) time-space-dependent distributed control on Burgers' Equation with Viscosity. Strong numerical evidence supports the PDE-GAN that it achieves the highest accuracy and shortest computation time without the need of line search which is necessary for vanilla PINNs." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Optimal control", "deep learing", "PINNs", "GANs" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/665bdeb29620710d7a202579a39b1ae431aef540.pdf" }, "presentation": null, "primary_area": { "value": "other topics in machine learning (i.e., none of the above)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/158ee1744cfa627d3d9e84db1fc841d8d5782c1c.pdf" }, "title": { "value": "PDE-GAN for solving PDE optimal control problems more accurately and efficiently" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
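Editorial note on the PDE-GAN record above: several reviews revolve around the weight $w$ that balances the control cost against the PDE residual in soft-constraint PINN baselines, and the line search over $w$ that PDE-GAN is meant to avoid. The toy sketch below shows where that weight enters the objective; the 1D Poisson-type control problem ($u'' = c$, tracking a desired state) and all network names are assumptions made for illustration, not taken from the paper.

```python
# Assumed sketch of a soft-constraint PINN objective for PDE optimal control:
# tracking cost J plus a PDE-residual penalty weighted by w.
import torch

u_net = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
c_net = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))

def pdeoc_loss(x_collocation, x_obs, u_desired, w: float):
    x = x_collocation.requires_grad_(True)
    u = u_net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    residual = d2u - c_net(x)                        # PDE constraint u'' = c
    cost = ((u_net(x_obs) - u_desired) ** 2).mean()  # tracking objective J
    return cost + w * (residual ** 2).mean()

# Toy usage: different values of w trade the cost term against the PDE residual.
x_col = torch.rand(128, 1)
x_obs = torch.linspace(0, 1, 32).unsqueeze(1)
u_d = torch.sin(torch.pi * x_obs)
loss = pdeoc_loss(x_col, x_obs, u_d, w=10.0)
loss.backward()
```

Sweeping `w` in such a loss is exactly the line search the reviewed baselines rely on; the paper's claim, as the reviews summarize it, is that adversarially trained discriminators replace this manual balancing.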
3RrNfVWodl
LOCAL: Latent Orthonormal Contrastive Learning for Paired Images
main
Active
paired images;representation learning;supervised contrastive learning
unsupervised, self-supervised, semi-supervised, and supervised representation learning
5;5;6;6
4;3;3;5
2;2;3;3
2;2;2;3
2;2;3;3
5.5
3.75
2.5
2.25
2.5
0.301511
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Please elaborate on the procedure in the single image as sample benchmark experiment in section 4.2 and Table 6 as the models discussed are left ambiguous. Is the single image fed through the same model as described by Figure 6 (for OCL) and Figure 8 (for SCL)?\n\n2. On all experiments in 4.1, the smallest batch size is 8. Please clarify why 8 is this minimum batch size appropriate for evaluation?\n\n3. Discussion in 3.3 suggests a batch size large enough to enable the representation for different classes to become orthogonal is sufficient for OCL to attain a minimum. Is there a lower bound on minimum batch size (theoretically or empirically)?\n\n4. Please compare with 'Targeted Supervised Contrastive Learning for Long-Tailed Recognition,' which provides a better baseline for addressing data imbalance in SCL." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Well-illustrated geometric figures in the problem statement and motivation sections for the OCL and LOCAL models.\n\n2. The proposed OCL is supported by a theoretical analysis demonstrating that it has a lower bound and attains its minimum without contingency on data balance unlike SCL.\n\n3. Experimental results show consistent improvement upon the evaluation tasks compared to SCL." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a new method, Latent Orthogonal Contrastive Learning (LOCAL), for supervised contrastive learning by introducing a novel orthonormal contrastive loss, which enforces negative samples to be perpendicular to the anchor in the embedding space. This approach addresses the challenges of imbalanced classes and high computational load encountered in previous supervised contrastive learning methods when evaluated on two different pre- and post-disaster satellite image datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. There are no toy experimental examples where OCL successfully optimizes but SCL definitively has an embedding drift caused by a cyclical collapse, as described in section 2 and the discussion in 3.3.\n\n2. The HRA dataset is not cited but also is not presented as an original contribution thereby lacking sufficient context information comparable to the xBD dataset.\n\n3. Conclusion claims to test resultant embeddings on natural language inference, but no experiments refer to natural language inference." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. How large is a high-resolution remote sensing image? As the reviewer knows, the size of high-resolution remote sensing images has exceeded 1000 or even larger. If applicable, can the authors discuss how their method scales to larger image sizes (e.g., >1000*1000 pixels) that are common in remote sensing?\n2. Can the proposed method classify pairs of non-remote sensing images? The reviewer feels the proposed method does not consider the natural characteristics of remote sensing images. If applicable, the authors should discuss potential applications or experiments with non-remote sensing images. Besides, the authors should explain what characteristics of remote sensing images the proposed method leverages, if any.\n3. Why does orthonormal embedding reduce computation and use smaller mini-batches? If applicable, could the authors provide a more detailed explanation or proof of how orthonormal embeddings enable smaller batch sizes compared to standard contrastive learning approaches? One suggested way to explain it is to provide a computational complexity analysis or empirical runtime comparisons." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The authors introduce a novel solution of contrastive learning for paired image classification.\n2. The authors conduct comprehensive experiments to demonstrate the effectiveness of the proposed method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors propose a Latent Orthonormal Contrastive Learning (LOCAL) solution for paired image classification tasks. The proposed method can optimize class representation learning in an orthonormal fashion, which allows for the use of smaller mini-batches and addresses the class size imbalance. Theoretical analyses and extensive experiments demonstrate the effectiveness of the proposed method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "There are some grammatical errors/typos throughout the paper, which severely disturbs the readability. The reviewer recommends the authors should proofread or use a grammar checking tool to modify these typos throughout the paper. Some findings include but are not limited to: \n1) Page 2, line 93, there are two “thus” in this sentence.\n2) Page 3, line 152, there are two “is” in this sentence.\n3) Page 5, line 243, “??” is a typo and should be modified.\n4) Page 5-6, line 263 and line 272, the font color of these sentences is red. Are they typos?" 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The topic is interesting. The paper recognizes several drawbacks of the traditional contrastive loss when applied to tasks in satellite imagery, which has paired inputs, high memory cost and severe class imbalance, and proposes a targeted approach to these issues.\n- The proposed new loss function has both theoretical and empirical validation.\n- The proposed method has superior performance over baseline method on different datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a new contrastive learning approach to mitigate the drawback of traditional supervised contrastive learning for tasks with high-resolution data (which result to small batch size) and severe class imbalance. Specifically, it optimizes class representations in an orthonormal fashion. It conducts experiments on paired image datasets and demonstrate the superior performance of the proposed method over the traditional contrastive loss." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- I am a bit confused about the evaluation of the paired image dataset. What is the definition of the accuracy reported in Table 2? Do you calculate the accuracy for pre-disaster and post-disaster image together?\n- The baseline compared in the paper is not thorough. The paper only considers SCL (supervised contrastive learning). It addresses the problem of class imbalance, but does not compare with methods that have been dealing with class imbalance (e.g., the papers cited in the paragraph of Line 071 in the introduction) with itself. Also, it proposes to deal with high-resolution data which will lead to high memory cost, but I'm wondering how it will compare to other memory-saving strategies for contrastive learning, e.g., a memory bank in MoCo." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "The core idea of LOCAL involves making class representations orthogonal in latent space. 
However, a fixed-dimensional feature space can only accommodate a limited number of orthogonal class vectors. When the number of classes exceeds the feature dimensions, ensuring orthogonality for all class representations becomes impossible. How do the authors address this limitation, and what are the potential implications for scalability in larger class settings?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. **Simplicity**: LOCAL is straightforward and easy to implement, making it accessible for practical applications.\n1. **Theoretical Analysis**: The authors provide a thorough theoretical analysis of the optimization objective of LOCAL, proving a bound on the loss.\n1. **Performance Improvement**: LOCAL achieves consistent performance improvements over SCL." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a novel contrastive learning method aimed at addressing two issues with supervised contrastive loss: data imbalance and reliance on large batch sizes." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Insufficient Experiments**: Although LOCAL is introduced for paired images, its applicability extends to long-tailed learning. The current experimental results significantly limit the scope of LOCAL. The paper could benefit from additional comparative experiments with other enhanced contrastive learning methods based on SCL to validate its broader effectiveness.\n1. **Lack of Discussion on Related Work**: For example, there is a need to discuss methods like ProCo^[1], which also address challenges related to class imbalance and the need for large batch sizes.\n\n[1] Probabilistic Contrastive Learning for Long-Tailed Visual Recognition. TPAMI 2024." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024local,\ntitle={{LOCAL}: Latent Orthonormal Contrastive Learning for Paired Images},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3RrNfVWodl},\nnote={under review}\n}" }, "abstract": { "value": "Classification with comparative paired inputs, such as pre- and post-disaster satellite images, distinguishes classes of samples by encompassing dual feature sets that individually characterize a sample. Representation learning from comparative nature of the inputs calls for not only recognizing invariant patterns shared across all inputs but also effectively differentiating the contrastive attributes present between each pair of inputs. Supervised Contrastive Learning (SCL) aims to learn representation that maximally separates different classes and condenses within individual classes, thereby attaining an adversarial equilibrium. However, this equilibrium typically relies on the assumption of balanced data and large batch sizes for sufficient negative sampling. These issues are exacerbated when applied to paired satellite images due to increased computational load, high-resolution data, and severe class imbalance. To address these challenges, we introduce Latent Orthonormal Contrastive Learning (LOCAL), an approach that optimizes class representations in an orthonormal fashion. 
By learning each class to a unique, orthogonal plane in the embedding space, LOCAL is efficient with smaller batch sizes, provably effective regardless of class size imbalance, and yields more discriminative information between pairs of inputs via a feature correlation module. Experimental results on paired image data demonstrate superior performance of LOCAL over SCL, offering a powerful alternative approach for paired input analysis." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "paired images", "representation learning", "supervised contrastive learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/0dc21af8abc893987e05d1f2acb0c47188962833.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/4fe22ac72cd0846ac8339818d350f789c5833e1f.pdf" }, "title": { "value": "LOCAL: Latent Orthonormal Contrastive Learning for Paired Images" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3SMBSTG3qN
Beyond CVaR: Leveraging Static Spectral Risk Measures for Enhanced Decision-Making in Distributional Reinforcement Learning
main
Active
Reinforcement Learning;Distributional Reinforcement Learning;Risk Aversion;Spectral Risk Measures;Time-Consistency
reinforcement learning
3;5;6;6
4;4;3;2
3;3;3;3
1;3;3;3
2;2;2;3
5
3.25
3
2.5
2.25
-0.738549
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1.\tIn Table 1, when the objective is CVaR(0.1), why does QR-SRM with $\\alpha=0.1$ not achieve the highest value? Could the authors clarify the reasons influencing this outcome?\n2.\tThe authors introduce the decomposition theorem for SRMs (Theorem 2). Could they use one of the four examples to illustrate how this theorem applies in a practical scenario? This would help readers better understand these concepts." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.\tThis paper is easy to follow and well-organized. \n2.\tThe theoretical analyses throughout the paper are technically sound and comprehensive, providing strong support for the proposed method.\n3.\tThe authors preform thorough numerical studies across four examples. The proposed algorithm outperforms several baselines, highlighting its potential for real-world applications." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work studies the problem of incorporating static spectral risk measures (SRM into Distributional Reinforcement Learning (DRL) to enable more flexible and interpretable risk-sensitive decision-making. Unlike conventional Conditional Value-at-Risk (CVaR), SRMs offer a spectrum of risk preferences, allowing for more flexible risk-sensitive policies. The authors argue that using SRMs in DRL enables more flexible and interpretable policies, as SRMs allow for a spectrum of risk preferences rather than a fixed measure like CVaR. The authors propose an iterative DRL algorithm that utilizes a two-stage optimization process to optimize SRMs. They provide theoretical guarantees, proving convergence and characterizing the temporal decomposition of SRMs within the DRL framework. This decomposition enhances interpretability, as it captures how risk preferences evolve over time. The algorithm’s effectiveness is demonstrated through extensive numerical studies across four example environments, where it outperforms several baseline models, highlighting its potential for real-world applications." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "See Question section" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- The authors use an extended state space to solve the inner optimization problem. Can you provide rigorous justification\n- The proposed algorithm's computational complexity is not thoroughly analyzed. Given the bilevel optimization algorithmic framework, and the added complexity of optimizing SRM in a distributional RL framework, the computational complexity of the proposed algorithm may be a concern.\n\nsome missing references about DRL for RSRL\n- Keramati, R., Dann, C., Tamkin, A. and Brunskill, E., 2020, April. Being optimistic to be conservative: Quickly learning a CVaR policy. In Proceedings of the AAAI conference on artificial intelligence (Vol. 34, No. 04, pp. 4436-4443).\n- Liang, H. and Luo, Z.Q., 2024. Bridging distributional and risk-sensitive reinforcement learning with provable regret bounds. Journal of Machine Learning Research, 25(221), pp.1-56.\n- Chen, Y., Zhang, X., Wang, S. and Huang, L., Provable Risk-Sensitive Distributional Reinforcement Learning with General Function Approximation. In Forty-first International Conference on Machine Learning." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The application of SRM to DRL for more interpretable risk-sensitive policies is innovative, introducing valuable theoretical insights and practical tools for risk-sensitive control.\n- Theoretical grounding and comprehensive experimental evaluations\n- The problem formulation, motivation, and results are well-articulated, though some technical details could be simplified." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses limitations in current risk-sensitive Distributional Reinforcement Learning by introducing an algorithm that optimizes for a broader class of static SRM, moving beyond the commonly used CVaR. The proposed QR-SRM algorithm utilizes SRMs to adjust the agent's risk sensitivity dynamically, improving policy interpretability and adaptability. Extensive experiments on environments demonstrate that QR-SRM achieves superior performance and consistency with SRM objectives." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Certain theoretical sections, especially around SRM decomposition, may challenge readers due to dense terminology and complex proofs. More illustrative examples or simplified explanations could improve accessibility.\n- The paper assumes specific properties of SRMs and fixed initial preferences, which may limit the algorithm's flexibility in dynamic environments." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- In Line 232, shouldn't it be $h_{l+1} = \\arg \\max _h \\mathbb{E}[h(G^{\\pi^*_l})] + \\int_0^1 \\hat{h}(\\phi(u)) du$? \nI'm wondering if $ \\int_0^1 \\hat{h}(\\phi(u)) du=0$ is inherently guaranteed within the algorithm, or if there is a condition that ensures this which I may have missed.\n\n- In Line 383, $\\alpha=0.6$ seems to be maximized at $CVaR_{0.8}$, and $\\alpha=0.4$ at $CVaR_{0.6}$. Although the small vertical line intervals may be minor, the lack of alignment with targeted risk levels raises concerns.\n\n- In Table 2, aren't the cases with $\\alpha=1.0$ essentially QR-DQN?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper addresses a well-motivated problem by proposing an algorithm with asymptotically optimal regret bounds for scenarios involving trajectory-level feedback. \n\n- Up to Section 4, the authors clearly outline the motivations, objectives, and proof sketches, making it easier for readers to grasp the core concepts.\n\n- The authors included code for reproducibility and conducted experiments across diverse environments, adding practical value and robustness to the study." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a novel distributional reinforcement learning algorithm called QR-SRM, which extends beyond expected return by incorporating spectral risk measures. \nThe authors provide convergence guarantees and enhance interpretability of policies by decomposing coherent risk measures." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- A minor weakness is the need for improved visualization in the experimental results. The vertical lines are not immediately distinguishable, so adjusting the dash spacing, line thickness, or adding markers would enhance clarity. Additionally, using consistent labels for the same algorithm in the legend would help reduce reader fatigue.\n\n- My main concern is that the low performance of the experimental results makes it difficult to be confident that the algorithm was correctly reproduced. According to [1], QR-DQN performs at least 100 points on LunarLander-v2 after 0.1M steps. However, Table 3 of this paper shows much lower scores, suggesting the results may not be fully reproducible. Can the authors clarify?\n\n- Although the experiments were conducted in various environments, the aforementioned concerns about reproducibility make fair comparison with other baselines challenging. Reporting performance on some Atari environments, commonly used by algorithms that assume discrete action spaces, would provide a more reliable basis for apple-to-apple comparison.\n\n\n[1] Cho, Taehyun, et al. \"Pitfall of optimism: distributional reinforcement learning by randomizing risk criterion.\" Advances in Neural Information Processing Systems 36 (2024).\n\nTypos\n\n- Line 197: \"Coheret\" should be \"Coherent\"\n\n- Line 230: $[h_l (G^{\\pi})]$ should be $\\mathbb{E}[h_l (G^{\\pi})]$\n\n- Line 286: \"Output\" should be in bold." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "(I) What are the contribution of section 4 compared to [1]? Please clearly indicate which methods are re-written from [1] and what is new in this paper, in the main body and also appendix A and B. \n\n(II) Please address the limitation pointed out in Weaknesses section (III):Did not discuss the main limitation of extending static spectral risk MDP to model-free/distributional RL. Specifically:\n- (a) There is no analysis of the convergence properties of the algorithm 2 TD loss update (a static risk variant similar to [2]).\n- (b) Missing analysis for approximation errors and guarantees arising from quantile discretization $\\tau_i$.\n- (c) Contraction analysis is missing, considering that the minimizer of the Huber loss may not be unique (see [3]).\n\n\nReferences:\n\n[1] Nicole Bauerle and Alexander Glauner. Minimizing spectral risk measures applied to Markov decision processes. Mathematical Methods of Operations Research, 94(1):35–69, 2021." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The strengths of this paper lie in its deep understanding of the current state of research in risk-averse reinforcement learning (RL) and the limitations of recent work in risk-averse distributional RL (DRL). Specifically, the paper identifies key challenges: (1) dynamic and fixed risk DRL approaches often lack interpretability, and (2) the dual representation of coherent risk measures encounters issues during policy optimization. The authors’ solid grasp of the mathematical foundations behind risk measures enables them to combine and present a concise and theoretically sound introduction." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper aims to extend the work of Bauerle and Glauner [1] on static spectral risk measures (convex combinations of CVaR) in Markov Decision Processes (MDPs) to the context of distributional reinforcement learning (RL). Sections 4 and Appendices A and B reformulate the approach from [1] using distributional value functions. Theorem 1 provides a bound on the performance of the policy derived from greedy action selection over an augmented state space (x, s, c). Algorithm 2 proposes the TD error computation for distributional value function, contrasting with methods like QR-DQN and IQN, by directly addressing for static spectral risk measures. Theorem 2 extends the decomposition from coherent risk measures [2] to a broader class of spectral risks, increasing the generalizability of the approach to a wider array of risk-sensitive applications. 
Finally, the experiments validate the proposed algorithm, offering evidence of its efficacy and robustness within this distributional risk-sensitive framework.\n\nReferences:\n\n[1] Nicole Bauerle and Alexander Glauner. Minimizing spectral risk measures applied to Markov decision processes. Mathematical Methods of Operations Research, 94(1):35–69, 2021.\n\n[2] Georg Ch. Pflug and Alois Pichler. Time-Consistent Decisions and Temporal Decomposition of Coherent Risk Functional. Mathematics of Operations Research, 41(2):682–699, 2016." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Despite the authors’ strong grasp of the limitations in risk-averse distributional RL research, this paper has several notable weaknesses:\n\n(I) The primary weakness lies in the limited originality of the contributions. Much of the content in Theorems and Lemmas in Appendix A, B, and Section 4 is directly adapted from [1], with only minor modifications to distribution value function representation. This reliance raises questions about the novelty and depth of the contributions, compared to [1]. \n\n(II) While the Introduction and Preliminaries are well-articulated, Sections 4 and 5 suffer from clarity issues. Section 5 appears disconnected from previous sections. Spectral risk measures can be represented as convex combinations of CVaR, Theorem 2 in Section 5 leverages this property to extend the dual decomposition from coherent risk to general spectral risk measures. However this dual decomposition is unrelated to algorithm 1 and 2 in the earlier sections.\n\n(III) Did not discuss the main limitation of extending static spectral risk MDP to model-free/distributional RL. Specifically:\n- (a) There is no analysis of the convergence properties of the algorithm 2 TD loss update (a static risk variant similar to [2]).\n- (b) Missing analysis for approximation errors and guarantees arising from quantile discretization $\\tau_i$.\n- (c) Contraction analysis is missing, considering that the minimizer of the Huber loss may not be unique (see [3]).\n\nReferences:\n\n[1] Nicole Bauerle and Alexander Glauner. Minimizing spectral risk measures applied to Markov decision processes. Mathematical Methods of Operations Research, 94(1):35–69, 2021.\n\n[2] Shen, Yun, et al. \"Risk-sensitive reinforcement learning.\" Neural computation 26.7 (2014): 1298-1328.\n\n[3] Rowland, Mark, et al. \"An analysis of quantile temporal-difference learning.\" (2023)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024beyond,\ntitle={Beyond {CV}aR: Leveraging Static Spectral Risk Measures for Enhanced Decision-Making in Distributional Reinforcement Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3SMBSTG3qN},\nnote={under review}\n}" }, "abstract": { "value": "In domains such as finance, healthcare, and robotics, managing worst-case scenarios is critical, as failure to do so can lead to catastrophic outcomes. Distributional Reinforcement Learning (DRL) provides a natural framework to incorporate risk sensitivity into decision-making processes. However, existing approaches face two key limitations: (1) the use of fixed risk measures at each decision step often results in overly conservative policies, and (2) the interpretation and theoretical properties of the learned policies remain unclear. 
While optimizing a static risk measure addresses these issues, its use in the DRL framework has been limited to the simple static CVaR risk measure. In this paper, we present a novel DRL algorithm with convergence guarantees that optimizes for a broader class of static Spectral Risk Measures (SRM). Additionally, we provide a clear interpretation of the learned policy by leveraging the distribution of returns in DRL and the decomposition of static coherent risk measures. Extensive experiments demonstrate that our model learns policies aligned with the SRM objective, and outperforms existing risk-neutral and risk-sensitive DRL models in various settings." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Reinforcement Learning", "Distributional Reinforcement Learning", "Risk Aversion", "Spectral Risk Measures", "Time-Consistency" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/3a83e661028227edf972a89a977ebe0266f149ba.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/45efddf61d2ea2ef8fdc3d6e67ef680254b84c6b.zip" }, "title": { "value": "Beyond CVaR: Leveraging Static Spectral Risk Measures for Enhanced Decision-Making in Distributional Reinforcement Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3TnLGGHhNx
From Pixels to Tokens: Byte-Pair Encoding on Quantized Visual Modalities
main
Active
Multimodal Large Language Models;Image Tokenizer;Token Merge
foundation or frontier models, including LLMs
5;5;6
3;3;3
2;2;3
2;2;2
2;2;3
5.333333
3
2.333333
2
2.333333
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Why is there a 0.99 in L770 and L253? Is this made up?\n2. For Figure 2 (b) (c), all settings converge after ~150 iterations. I don't think they make any difference.\n3. Why vocabulary changes from 8k-> 16k, there is a performance drop? I can not find any evidence in the proof that can demonstrate this point.\n4. The proposed BPE is very similar to super pixel or clustering algorithm. Authors should discuss the difference and compare the performance. \n5. In table 5.1, authors can add another classical setting: During SFT, visual encoder is frozen." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. From the results shown to us, there is some improvement.\n2. Experiment settings are clear" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper tried to use BPE for image tokenization. From the results shown to us, there is some improvement." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Performance is poor compared to any CLIP style or even DINO style MLLM as the visual encoder.\n2. There is no projector in the experiments. This could be an extreme unfair setting compared to classical pipeline.\n3. I do not think proofs are helpful to understand what is going on in the experiments." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. The LLM+VQ+BPE is also supervised finetuned on LLaVA-One-Vision, etc data, however, is far behind LLaVA-OneVision and other models that trained on these data. Then what's the benefit of this VQ+BPE compared with previous MLLM practices?\n2. Is this VQ+BPE applied to other LLM beyond LLaMA-3.1-8B and get similar observations?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This BPE image tokenization approach is novel and that potentially help the transformer better understand alignment between text and image with a semantic image token.\n2. 
There is a theoretical analysis in Section 3 on how BPE tokenization benefits transformer learning.\n3. The scalability of BPE is reflected in the model improving when larger-scale data such as ShareGPT4 is added." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes to apply Byte-Pair Encoding to visual data: images are first encoded into discrete token IDs, and a BPE image tokenizer is then trained to produce image tokens with semantic priors (e.g., whereas the image tokens for 'a white cat' were previously scattered across the image token sequence, after BPE a single token can represent 'a white cat'). The experiments are mainly based on applying BPE image tokenizer training to LLaMA-3.1 and comparing on VQAv2, MMBench, etc." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The experimental evidence is rather weak. First, the model is far behind the current MLLM SOTA on public benchmarks. For example, the best reported numbers of the proposed model come from LLM+VQ+BPE with Additional scaling (SFT), which achieves 60.6 on VQAv2, 44.0 on MMBench, and 48.2 on VizWiz, far behind similarly sized 7B LLaMA-based MLLMs.\n2. Second, the ablation is not sufficient to show the benefit of the BPE image tokenizer. Only one table of results compares LLM+VQ and LLM+VQ+BPE, and the details of these two models are not described, e.g., what exactly is implemented for LLM+VQ." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Applicability of the Theoretical Model: How does a simplified 2D Markov process adequately capture the complex structure of real-world image data?\n\n2. Sensitivity to Information Loss: How is the potential impact of information loss in tokenization, especially for detail-sensitive tasks, theoretically assessed?\n\n3. Task Representativeness and Generalization: How can results on VQA-like tasks ensure performance on precision-demanding tasks like image captioning?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper creatively adapts byte-pair encoding (BPE) for images, aiming to make visual data work more seamlessly with text in multimodal models.\n\n2. The approach integrates structural information directly into image tokens, which could help models better understand and align visuals with text, showing solid potential in cross-modal tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a novel BPE image tokenizer that brings byte-pair encoding (BPE) to image tokenization, enhancing multimodal large language models (MLLMs) in aligning visual and textual information.
By embedding structural priors into image tokens, the method shows promise in cross-modal tasks and scalability, offering a new approach to unified visual and text processing." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.Theoretical framework has several notable limitations:\n 1.1Lack of Multimodal Fusion Analysis: The paper’s theoretical analysis is focused on 2D image data alone and does not delve into how the BPE tokenizer facilitates the fusion of visual and textual information. Multimodal tasks typically require deep semantic and structural alignment across modalities, which is not sufficiently addressed in this analysis. This omission limits the theoretical support for the tokenizer’s efficacy in a multimodal model context.\n 1.2Absence of Analysis on Information Loss in Tokenization: The paper lacks a theoretical exploration of the potential information loss from BPE tokenization, such as the simplification of high-frequency visual details. There is no quantification of how the loss of these details might impact overall model performance. This gap in the analysis leaves the question of how well the BPE tokenizer preserves image details unanswered.\n\n2.A notable limitation of this paper is its focus on evaluating the BPE image tokenizer primarily through VQA-like tasks, which generally require only broad semantic alignment across modalities. While effective for assessing general multimodal comprehension, these tasks may not fully capture the demands of applications like image segmentation or image caption, where finer-grained visual detail and spatial relationships are crucial. Without evaluation on these more intricate tasks, it remains unclear how well the method handles scenarios that require detailed visual representation, potentially limiting its applicability to real-world multimodal use cases that demand high visual fidelity." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We introduce a novel BPE Image Tokenizer that applies BPE tokenization to visual data, enabling more effective integration of visual information in MLLMs and improving their performance." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024from,\ntitle={From Pixels to Tokens: Byte-Pair Encoding on Quantized Visual Modalities},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3TnLGGHhNx},\nnote={under review}\n}" }, "abstract": { "value": "Multimodal Large Language Models have made significant strides in integrating visual and textual information, yet they often struggle with effectively aligning these modalities. We introduce a novel image tokenizer that bridges this gap by applying the principle of Byte-Pair Encoding (BPE) to visual data. Unlike conventional approaches that rely on separate visual encoders, our method directly incorporates structural prior information into image tokens, mirroring the successful tokenization strategies used in text-only Large Language Models. This innovative approach enables Transformer models to more effectively learn and reason across modalities. Through theoretical analysis and extensive experiments, we demonstrate that our BPE Image Tokenizer significantly enhances MLLMs' multimodal understanding capabilities, even with limited training data. 
Our method not only improves performance across various benchmarks but also shows promising scalability, potentially paving the way for more efficient and capable multimodal foundation models." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Multimodal Large Language Models", "Image Tokenizer", "Token Merge" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/262fa2eb02a6924e7d350ce9a33d207c650680fb.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "From Pixels to Tokens: Byte-Pair Encoding on Quantized Visual Modalities" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3UB4NaEb1g
Decoding Intelligence: A Framework for Certifying Knowledge Comprehension in LLMs
main
Active
Large Language Models;Reasoning;Information Extraction;Certification
neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
3;3;5;6
4;4;3;3
2;4;3;3
2;2;2;2
2;3;3;3
4.25
3.5
3
2
2.75
-0.96225
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- L.65: strictly speaking, it’s not just the high number of parameters, right? Also nonlinearities?\n- What makes R in L.274 a random variable? Is the any(.) operator effectively a uniform distribution? How is it defined? Later, when the paper says “we equally prioritize the different possible path lengths”, does this mean that the any(.) operator appropriately reflects this choice, or is there any bias in the estimator?\n- Why use a multiple-choice format? Is the task too difficult otherwise?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- Important: I like the spirit of trying to give formal guarantees to model correctness for LLMs, which are difficult to handle analytically. The approach of using binomial proportion confidence intervals is a simple but appropriate one.\n- Important: Experiments are carefully designed to demonstrate the main claims of the paper. A wide variety of models are tested." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes to provide formal guarantees of model knowledge, as elicited by prompts derived from Wikidata. The object of the formal guarantee is the correctness of an answer to a question representing an arbitrary k-hop question stemming from some pivot node in the Wikidata knowledge graph. The means of formal guarantee is a binomial proportion confidence interval. To the best of my understanding, what this means is that the method guarantees model correctness with high confidence over a subgraph of Wikidata. The reason this requires a probabilistic guarantee, and cannot be done exhaustively, is that the combination of contexts for the questions, distractor text to provide alongside context, and few-shot examples for prompting the method creates a large prompt space that would be infeasible to exhaustively search. Experiments demonstrate that the authors can often bound model accuracy over a subgraph of Wikidata within about +/- .05 points." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Important: While the main result in this paper is interesting, I also find it hard to say that it is especially impactful. The basic approach is to use a binomial proportion confidence interval to estimate a model accuracy over a data distribution. The only way that this setting differs from any typical ML benchmark is that the authors define a data distribution over a subnetwork of Wikidata. As the authors note in L.242, longer paths in k-hop questions can result in somewhat surprising or meaningless queries. So I ask, what is really the point of certifying knowledge over such a subgraph? As in the qualitative example, we are not certifying knowledge about a movie. 
Rather, we are certifying knowledge about a movie, as well as a surprisingly diverse set of entities that are related to the movie. And, even if we were certifying knowledge about a movie, the next question is if the method in this paper merits publication if it primarily just makes use of an existing analytical binomial proportion confidence interval.\n- Of some importance: While I believe the central point that we cannot exhaustively test deep learning models over input spaces is well-received, the paper has to introduce some complexity in order to make this difficulty appear in the first place in their setting. Specifically, aliases, contexts, distractors, and few-shot examples are randomly ordered in order to make the input space too large to exhaustively search. I believe it would also be possible to fix a basic set of instructions for strong models like Gemini and do these questions zero-shot. In that setting, there would not be a large combinatorial space to explore. Or, it might be more appropriate to generate model-based paraphrases of the input question, which may be more naturally representative of knowledge query diversity than the chosen approach." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Given that the framework already uses a probabilistic approach, why not leverage the fact that an LLM can act as an implicit conditional probabilistic model? For instance, adjusting the output threshold or re-querying could yield probabilities that reflect comprehension more accurately.\n2. Relating to weakness 3: Why are aliases for entities and relations randomly sampled? This approach may inadvertently query the LLM’s embedded knowledge (e.g., recognizing that alias A corresponds to B), which might not be present in the provided context.\n3. Regarding sampling: How does the chosen sample of n=250 compare in ratio to the full knowledge graph? Additionally, how do we justify that this sample is unbiased, given that only the top 2000 nodes and edges are selected?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Provides a robust quantitative probabilistic framework for evaluation.\n2. Overall, the presentation is clear and the structure flows well.\n3. Accompanied by code for reproducibility." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors introduce QuaCer-C, a framework designed to assess knowledge comprehension in LLMs by sampling paths of lengths 1 to 5 from the WikiData5m knowledge graph and constructing context + distraction + query sets as tasks for the models to complete. Since responses are evaluated on a success/fail binary basis, Clopper-Pearson confidence intervals are used to establish upper and lower bounds for the resulting metrics. 
Experiments on major closed-source and open-source LLMs indicate that larger models perform better, while shuffled contexts and added distractors degrade performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The findings are somewhat predictable and could benefit from deeper insights.\n2. Some redundant content in the main text could be replaced by key details currently in the Appendix, such as the process for generating queries and context.\n3. There’s ambiguity as to whether the LLM’s responses are derived from embedded knowledge or the provided context, thus raising questions about true comprehension. The prompt itself does not restrict the LLM to answer based solely on the given context. For example, in a hypothetical question like “Matthew Perry→(character_acted_by)→(birth date)→?”, the LLM could respond from its internal knowledge base rather than relying solely on the provided context." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Is the evaluation process merely a standard method of sampling questions from a knowledge graph to assess LLMs? If so, why not utilize existing KBQA/GraphQA datasets?\n2. Is there any theoretical guarantee for the bounds introduced?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This paper provides a detailed description of the approach, including the formalization, theoretical framework, and algorithmic implementation.\n2. Models of varying sizes and employing different pretraining techniques are evaluated." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper aims to develop a formal certification framework for evaluating knowledge comprehension in LLMs. The authors propose an approach that frames knowledge comprehension as a formal specification using a knowledge graph. The accuracy of LLM responses is assessed by the probability of generating correct answers to knowledge comprehension prompts sampled from a distribution based on the knowledge graph. However, this approach, in my opinion, closely resembles a basic KBQA evaluation process for LLMs and lacks difference compared to existing work. Furthermore, current proprietary models, such as Gemini-Pro and GPT-4o, have already demonstrated impressive accuracy in knowledge utilization, as shown in Figure 3, with performance scores between 0.7 and 0.8, which raises questions about the significance of this task and the motivation of this work." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "This paper provides an extensive and complex introduction and description of the approach for formalizing knowledge comprehension evaluation using a knowledge graph. 
The knowledge comprehension capability of LLMs is assessed by measuring the accuracy of their responses to prompts sampled from paths within the knowledge graph. However, (1) there is no rigorous theoretical proof to guarantee the approach, and (2) it appears to be a very basic, standard KBQA evaluation process using LLMs nowadays, lacking distinction from existing work. I find the motivation, novelty, and differentiation of this work unclear. Some related work is omitted like [1,2]\n\n[1] Zha, Hanwen, Zhiyu Chen, and Xifeng Yan. \"Inductive relation prediction by BERT.\" Proceedings of the AAAI conference on artificial intelligence. Vol. 36. No. 5. 2022.\n[2] He, Xiaoxin, et al. \"G-retriever: Retrieval-augmented generation for textual graph understanding and question answering.\" arXiv preprint arXiv:2402.07630 (2024)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "### Questions\n1. Does the approach also work without few-shot examples? Or are they needed to convey the answer format?\n2. How long is the context (per node) and does it contain information beyond the Wikipedia page's abstract? The authors mention that only one distractor is included due to context size limitations. However, all the studied models support, or have versions that support, context lengths of at least 32k. At least if Llama-3.1-8B was used, rather than Llama-3-8B, which I'm not sure about.\n3. Do the authors think that techniques like chain-of-thought prompting would change the results? The paper investigates problems that inherently require multi-step reasoning, whereas the evaluation expects models to produce the answer almost as the first token. Allowing for additional reasoning steps might significantly improve accuracy.\n\n### Suggestions\n1. It would be helpful to include the appendix into the main paper's PDF, not as a separate file.\n2. It might be helpful to name the certified property, i.e. \"our overall property\", line 228.\n3. Currently, prompts are constructed by sampling graph trajectories starting from a specific root node. Another interesting approach might be to sample trajectories whose edges all have the same relationship type, e.g. which are all of the form \"... -> (appeared in movie) -> (directed by) -> ?\", irrespective of the nodes that appear in them. Such an approach might be able to assess/certify how well a model can comprehend knowledge about a particular multi-step relationship, irrespective of what the exact entities are, e.g. how well the model can comprehend which directors an actor worked with." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Overall, the paper is well-written and easy to follow.\n2. 
I believe that the introduced knowledge comprehension test can serve as a useful benchmark for evaluating the ability of LLMs to retrieve information from prompts, reason about that information and use it to answer complex questions.\n3. The paper provides a comprehensive assessment of the knowledge comprehension abilities of many of the currently most popular LLMs." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces QuaCer-C, a protocol to assess the knowledge comprehension abilities of LLMs, i.e. their ability to extract information from reference inputs and reason over it to answer questions. To this end, the authors introduce an evaluation protocol that constructs multi-step reasoning queries together with multiple-choice answers and gathers reference information based on traversing a knowledge graph. By sampling many paths starting from the same root node, the authors are able to estimate confidence intervals for whether an LLM will answer knowledge comprehension queries based on that root node correctly. The paper uses this approach to quantitatively and qualitatively evaluate the knowledge comprehension abilities of several popular open-weight as well as closed models. The results show that closed models significantly outperform open-weight ones, and shed light on the failure modes of knowledge comprehension." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. I am not sure in which situations the correctness certificates derived by QuaCer-C would be useful. The certificates hold for prompts sampled from the same distribution as the 250 sample prompts. But that means that certificates can only be obtained for cases where a corresponding knowledge graph exists and the relevant queries can be expressed as graph traversals. But in such cases, it would be much simpler to query the knowledge graph directly, rather than using an LLM to extract information from and reason over it. The cases where LLM-based knowledge comprehension is actually required are typically much less structured documents without a corresponding knowledge graph, but in those cases QuaCer-C cannot compute certificates. I still think the prompt construction and evaluation approach can serve as a useful benchmark for the knowledge comprehension abilities of LLMs, but I don't see a scenario where the derived certificates would be useful.\n2. The approach might incorrectly mark answers as wrong in case of 1 - n or m - n relationships. E.g. in Figure 4, first row, in the example \"Batman Begins: ... -> (characters) -> (artist) -> (nomination received) -> ?\" there could be multiple characters (1 - n relationship) whose artists might have received different nominations. The model might pick a different but valid character than intended and then reason correctly, potentially using its parametric knowledge, and arrive at a different answer than expected that is nonetheless still correct.\n3. The paper claims that larger models are better at knowledge comprehension. While Table 1 provides some evidence in this direction, I believe that it is insufficient to confidently claim a size-dependent relationship, because of a number of potential confounders: 1) The smaller models in the table are all open source ones, while the larger ones are closed, and their size is not (officially) known.
2) Except for (Phi-3-3B, Phi-3-14B) and (Gemini-1.5-Flash, Gemini-1.5-Pro) (whose size difference and other potential differences are unknown), all models belong to different families, so other than size they might also differ in training data mixtures, training strategies and architecture. The only really comparable datapoint here is (Phi-3-3B, Phi-3-14B), and those two models show 1% or less difference. To make claims about model size effects more reliable, comparisons of several (open) models within the same family would be needed.\n4. Minor clarity issue: It was not clear to me how the few-shot examples are constructed until I came across Appendix B.5. Please reference the appendix in the main paper and provide some minimal information about the few-shot examples, e.g. that the same fixed set of examples is used for all prompts." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We certify LLMs for their knowledge comprehension capabilities." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024decoding,\ntitle={Decoding Intelligence: A Framework for Certifying Knowledge Comprehension in {LLM}s},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3UB4NaEb1g},\nnote={under review}\n}" }, "abstract": { "value": "Knowledge comprehension capability is an important aspect of human intelligence. As Large Language Models (LLMs) are being envisioned as superhuman\nagents, it is crucial for them to be proficient at knowledge comprehension. However, existing benchmarking studies do not provide consistent, generalizable, and\nformal guarantees on the knowledge comprehension capabilities of LLMs. In\nthis work, we propose the first framework to certify knowledge comprehension in\nLLMs with formal probabilistic guarantees. Our certificates are quantitative - they consist of high-confidence, tight bounds on the probability that a target LLM\ngives the correct answer on any knowledge comprehension prompt sampled from\na distribution. We design and certify novel specifications that precisely represent\ndistributions of knowledge comprehension prompts leveraging knowledge graphs.\nWe certify SOTA LLMs for specifications over the Wikidata5m knowledge graph.\nWe find that the knowledge comprehension capability improves significantly with\nscaling the size of the models." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Large Language Models", "Reasoning", "Information Extraction", "Certification" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/6265462159d5326109b7465aa59b810483d1cab9.pdf" }, "presentation": null, "primary_area": { "value": "neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/570b97193aa5cd6a6dfa8a3d05b1b3a9041a5edd.zip" }, "title": { "value": "Decoding Intelligence: A Framework for Certifying Knowledge Comprehension in LLMs" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3UKOzGWCVY
Learn-by-interact: A Data-Centric Framework For Self-Adaptive Agents in Realistic Environments
main
Active
Data synthesis;Agent;Adaptation
applications to computer vision, audio, language, and other modalities
5;6;6;8
4;4;3;4
3;3;3;4
2;2;3;4
2;3;3;4
6.25
3.75
3.25
2.75
3
0.132453
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Generalizability of this method, for example, for the web domain, it's kinda cheating to use these sources of documentation? WebArena is essentially built on open-source alternatives of these sources. It might be interesting to explore removing the reliance on documentation, as it may not be strictly necessary; maybe you can just ask the LLM to propose task instructions based on the given environment?\n\nIn your dataset, is it possible for one data sample to be a sub trajectory of another sample?\n\nOn WebArena, why do you choose Step as your baseline method rather than more direct baseline used in the original paper of WebArena?\n\nTypos:\nline 44: desktop computing etc. -> desktop computing, etc.\nAlg 1: initilize interaction trajectory -> initialize interaction trajectory\nAlg 2: Langueg Model -> Language Model" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The problem addressed in this paper is highly significant and of great interest to the language agent community. Due to the scarcity of process/trajectory data available online, the community is eager to find efficient and scalable methods to obtain more trajectory-level data for training. The dataset collected in this paper might be a valuable resource to the community.\n2. This paper covers multiple domains and demonstrates promising results on all of them, which shows the effectiveness of the data collection process.\n3. This paper conducts comprehensive analyses, including scaling law of the training data and the effect of trajectory length." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper aims to address the critical problem of data collection for training language agents. Annotating trajectory-level data in various environments can be quite expensive. To deal with this, this paper instead proposes a data synthesis pipeline that leverages the documentation available on the internet to generate high quality task instructions, and use the LLM to compose the corresponding trajectory for each instruction. Specifically, the error rate of directly generating the trajectory using LLM can be quite high. As a result, this paper proposes a novel scheme called backward construction to summarize the trajectory and refine the original instruction to make it align better with the generated trajectory. In addition, they also use LLMs to filter out low-quality data points. After obtaining the synthetic data, they use them for both fine-tuning and ICL in multiple different domains, including code agent, OS agent, and web agent. Experimental results show the effectiveness of their synthetic data." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
A key issue with the data synthesis pipeline proposed in this paper is its generalizability. Specifically, the pipeline relies on a set of source documentation to generate meaningful task instructions, serving as the starting point of the entire process. However, the assumption that in-domain documentation will always be available may not hold in all cases.\n2. Related to the first point, the reliance on in-domain documentation might also set a ceiling for the size of the dataset. Also, the scaling law in this paper (i.e., Fig 3) suggests that achieving continuous gains becomes challenging once around 100k data samples have been collected." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- Can you show examples of what trajectories get filtered out?\n- Was LATS implemented by the authors for the benchmarks tested? As far as I'm aware, the original LATS didn't evaluate on the benchmarks tested." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- The proposed approach for generating exemplars for ICL is novel and effective across several agentic scenarios.\n- The paper is well written and easy to follow.\n- The experiments and ablations are very thorough, and validate the components of the proposed method well." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes learn-by-interact, which generates task-specific exemplars that are curated via backward construction, which annotates each trajectory with an aligned objective instruction, and filtered by a committee of LLMs.\nThe results are evaluated on a wide array of benchmarks, SWE-bench, WebArena, OSWorld and Spider2-V, showing the effectiveness over strong baselines." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The discussion/details on filtering of synthesized trajectories could be improved." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- As mentioned above, the proposed backward construction may have quadratic complexity.
I note the relevant discussion in Figure 2, but it is unclear whether this figure applies to inference only or the entire training-inference pipeline.\n\n- On page 3, Algorithm 1 appears to lack a definition of the L() function presented in line 11. Does this function rely on the same LLM backbone as the other function, LLM(), mentioned above?\n\n- On page 5, Table 1, the drop rate is relatively high for OSWorld and Spider2-V, particularly for the latter, where fewer than 20% of synthesized samples are retained. This appears inefficient. Could the authors provide more discussion on this matter?\n\n- Following Question 3, could the authors assess the potential impact of this filtering rate on final performance? For example, if a less strict filtering rule is applied, retaining more samples, how would this affect overall performance?\n\n- In Algorithm 2 on page 4, what is the difference between the append() operation (line 10) and the += operator (line 19)?\n\n- (Minor) It seems that all evaluations lack the percentage unit (%) for accuracy." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "In contrast to conventional data synthesis approaches, the proposed backward construction leverages unsuccessful trajectories, thereby improving data collection efficiency. This idea bears a high-level resemblance to the renowned reinforcement learning algorithm Hindsight Experience Replay, which is elegant and proven effective. The paper also provides comprehensive experiments covering performance, efficiency, and scalability." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a novel data synthesis framework to enhance agent performance. Contrary to the conventional forward data construction, the proposed backward construction generates instructions from interaction trajectories with the environment. The synthesized data can be used for training the generator or in-context learning. Experiments across four environments demonstrate the potential of this method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "However, as shown in Algorithm 1 (lines 16-21), the proposed backward construction has quadratic complexity concerning trajectory length, $O(\\text{len}(T)^2)$. This raises concerns regarding data collection efficiency and potentially higher computational costs than conventional forward construction. I am open to raising my score if the authors address the following concerns listed in the Questions section." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Refer to the concerns in “Weaknesses”" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. To the best of my knowledge, the backward construction mechanism is novel.\n2. The paper is well written." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a data synthesis method named “LEARN-BY-INTERACT” that uses LLM to generate instructions and trajectories based on given environmental documents, and the synthesized data can be used in ICL and training scenarios to improve the performance of agents. Experiments conducted on 4 agent datasets validate the effectiveness of LEARN-BY-INTERACT." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Although the authors claim that the proposed LEARN-BY-INTERACT can adapt LLM agents to any given environments without human annotations in both abstract and conclusion sections, but I think its application may not be very wide, it needs the environment to have corresponding documentation, and the LLM used to synthesize the data should be familiar with the environment, otherwise it is difficult for the LLM to synthesize valid instruction and trajectory. More discussion about the limitations of the methodology would make this paper better.\n\n2. There are many works that focus on using more powerful LLMs to synthesize data to improve agent performance, such as AgentGen [1] and AgentTuning [2], but this paper does not discuss or compare them.\n\n3. This method requires a lot of LLM calls to generate and filter the data, especially the backward construction phase, which seems costly. \n\n[1] AGENTGEN: Enhancing Planning Abilities for Large Language Model based Agent via Environment and Task Generation\n[2] Agenttuning: Enabling generalized agent abilities for llms." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "A data-centric framework to adapt LLM agents to any given environments without human annotations" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024learnbyinteract,\ntitle={Learn-by-interact: A Data-Centric Framework For Self-Adaptive Agents in Realistic Environments},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3UKOzGWCVY},\nnote={under review}\n}" }, "abstract": { "value": "Autonomous agents powered by large language models (LLMs) have the potential to enhance human capabilities, assisting with digital tasks from sending emails to performing data analysis. The abilities of existing LLMs at such tasks are often hindered by the lack of high-quality agent data from the corresponding environments they interact with. We propose LEARN-BY-INTERACT, a data-centric framework to adapt LLM agents to any given environments without human annotations. LEARN-BY-INTERACT synthesizes trajectories of agent-environment interactions based on documentations, and constructs instructions by summarizing or abstracting the interaction histories, a process called backward construction. 
We assess the quality of our synthetic data by using them in both training-based scenarios and training-free in-context learning (ICL), where we craft innovative retrieval approaches optimized for agents. Extensive experiments on SWE-bench, WebArena, OSWorld, and Spider2-V spanning across realistic coding, web, and desktop environments show the effectiveness of LEARN-BY-INTERACT in various downstream agentic tasks — baseline results are improved up to 11.1% for ICL with Claude-3.5 and 23.1% for training with Codestral-22B. We further demonstrate the critical role of backward construction, which provides up to 10.6% improvement for training. Our ablation studies demonstrate the efficiency provided by our synthesized data in ICL and the superiority of our retrieval pipeline over alternative approaches like conventional retrieval-augmented generation (RAG). We expect that LEARN-BY-INTERACT will serve as a foundation for agent data synthesis as LLMs are increasingly deployed at real-world environments." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Data synthesis", "Agent", "Adaptation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/7412b941e476bbd678f1eccc25d30eec4dd483b3.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/4ca2d8c8e9c8d9475a0dd3508eaea68acc183b3a.zip" }, "title": { "value": "Learn-by-interact: A Data-Centric Framework For Self-Adaptive Agents in Realistic Environments" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
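As the reviews above describe it, backward construction turns each sub-trajectory collected during environment interaction into a training example by writing, after the fact, an instruction that matches what the trajectory actually did. A minimal sketch of that idea, where summarize_with_llm is a hypothetical stand-in for the LLM call and details of the paper's Algorithm 1 (filtering, step granularity) are not reproduced:

from typing import Callable, List, Tuple

def backward_construct(
    trajectory: List[str],
    summarize_with_llm: Callable[[List[str]], str],
) -> List[Tuple[str, List[str]]]:
    """Derive (instruction, sub-trajectory) pairs from one interaction trajectory."""
    examples = []
    n = len(trajectory)
    for start in range(n):
        for end in range(start + 1, n + 1):
            sub = trajectory[start:end]
            # The instruction is written to match what `sub` achieved, so even
            # trajectories that missed the original goal can yield usable data.
            examples.append((summarize_with_llm(sub), sub))
    return examples  # n * (n + 1) / 2 pairs, i.e. the O(len(T)^2) cost one reviewer raises

Enumerating every contiguous sub-trajectory is what produces the quadratic cost noted in the reviews; a cheaper variant would sample a fixed number of (start, end) pairs per trajectory instead of taking them all.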
3UaOlzDEt2
Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion
main
Active
Video-Language Reasoning;Video Question Answering;Multimodal Fusion;Parameter-Efficient Fine-tuning
applications to computer vision, audio, language, and other modalities
5;5;6;6;6
4;4;5;4;4
2;3;3;3;3
2;2;3;3;3
3;2;3;3;3
5.6
4.2
2.8
2.6
2.8
0.408248
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please refer to the weakness section." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- This paper proposes a framework capable of handling multiple modalities and addresses the issue of token quantity increasing with the number of modalities.\n\n- A single Q-former is used to process multiple modalities, avoiding the large increase in parameters typically associated with multi-modal input. Each modality requires only a small amount of modality-specific parameters, and since the parameters for each modality within the Q-former are independent, processing different modalities does not cause interference.\n\n- The modality-sequential and modular training approach accommodates the differences across various modalities, preventing overfitting or underfitting to any specific modality.\n\n- The paper demonstrates through multiple benchmarks that the proposed framework effectively integrates information from diverse modalities, thereby enhancing video reasoning capabilities." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces CREMA, an efficient and generalizable framework for video-language reasoning that enhances understanding through multiple modalities, including video, depth, audio, and 3D point cloud data, among others. CREMA employs a modular fusion approach, with lightweight, modality-adaptive modules that allow for easy integration of new modalities with minimal added parameters. The framework also incorporates a novel self-gated attention fusion technique to reduce computational demands. Additionally, it proposes a modality-sequential modular training and adaptive early exit strategy to boost training efficiency and enable faster adaptation to new modalities. CREMA demonstrates superior performance across multiple benchmarks, such as SQA3D, MusicQA, NExT-QA, TouchQA, and ThermalQA, highlighting the benefits of integrating diverse input modalities for improved video reasoning." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- I believe the main focus of this paper is on ensuring that the number of tokens input into the LLM does not increase linearly with the number of modalities, while maximizing parameter sharing across modalities to avoid excessive parameter growth. However, I feel that the teaser image does not effectively highlight these key points.\n\n- My biggest concern lies with the Self-gated Multimodal Query Fusion. 
This module concatenates tokens from different modalities along the channel dimension, meaning that the input modalities during inference must match those used in training exactly—neither more nor less—otherwise, there will be a parameter mismatch within the Self-gated Multimodal Query Fusion. Many videos, for example, may not contain point cloud information; however, if point cloud data was included as input during training, it must also be part of the input during inference. This limitation significantly restricts the flexibility of input modality types.\n\n- Additionally, the description of the zero-shot setup is not clear enough. Before performing zero-shot evaluation on SQA3D and MUSIC-AVQA, which datasets were used to train and optimize the model's new parameters? Furthermore, as mentioned above, I believe that the Self-gated Multimodal Query Fusion limits the model's zero-shot reasoning capabilities, as different combinations of input modalities would require different models. This implies that different models were likely used for zero-shot evaluation on SQA3D and MUSIC-AVQA. Therefore, the authors should clarify which specific model was used for evaluation in each experiment.\n\n- Some related works on integrating multiple modalities are missing, such as MultiPLY[1] and X-VILA[2], both of which are multimodal LLMs capable of handling various input modalities. The authors should discuss the relationship with these works.\n\n[1]. MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World\nYining Hong, Zishuo Zheng, Peihao Chen, Yian Wang, Junyan Li, Chuang Gan\n\n[2]. X-VILA: Cross-Modality Alignment for Large Language Model\nHanrong Ye, De-An Huang, Yao Lu, Zhiding Yu, Wei Ping, Andrew Tao, Jan Kautz, Song Han, Dan Xu, Pavlo Molchanov, Hongxu Yin" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to the weakness" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is clear writing and easy to follow.\n2. Few current works focus on integrating multiple modalities, so the authors' motivation is commendable.\n3. I appreciate the paper's innovation. Although it may not introduce many new structures, the modality-adaptive early exit strategy appears to have broad application potential. It's the first time I've seen the use of gradients to determine whether to exit early, and it is also the first method to apply early stopping by modality. Therefore, I acknowledge the paper's innovative approach." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes CREMA, a generalizable and modular modality-fusion framework that augments multiple modalities without extra human annotation and incorporates them into a query transformer, enhancing video reasoning. It introduces a progressive multimodal fusion design, maintaining computational efficiency and improving performance. Validated on 7 video-language reasoning tasks, CREMA outperforms or matches strong multimodal LLMs while significantly reducing trainable parameters, demonstrating its effectiveness and innovation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Overall, I believe this paper is worthy of acceptance and presents no significant issues. My only curiosity, as mentioned by the authors in the limitations section, is whether the method can be applied to more advanced baselines such as LLava, rather than just BLIP. If feasible, I would appreciate the authors addressing this point, which could lead me to adjust my score upwards." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* Q1: Why CREMA is called a video-language model? For example, SQA3D mainly uses RGBD as input, and the authors call CREMA a video-language model because the visual information is formatted as a video sequence.\n\n* Q2: Although the authors have compared the trainable parameter number, it is arguable what is the number of total parameters, as LORA is used. The questions is: what is the total number of parameters, and what is the speed of inference?\n\n* Q3: It is interesting to see that modalities of depth or surface normal are used, or even helpful, for MUSIC-AVQA and NExT-QA. I suggest the authors provide analysis or visualizations of how such modalities benefit the models." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* S1: The presentation of this paper is straightforward and clear.\n\n* S2: The proposed fusion approach with Q-former (architecture) and modality-sequential training (training recipe) are both reasonable and looks simple for other researchers to follow.\n\n* S3: The evaluation covers various domains, including audio, point clouds, optical flows, etc. The approach CREMA has demonstrated competitive performance across these scenarios, especially when the number of modalities is large." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a method, \"CREMA,\" that addresses the problem of video understanding with diverse modalities, including optical flow, point clouds, audio, etc. CREMA first uses modality-specific encoders to encode each modality. Then CREMA introduces a Q-former to extract features from each modality. 
Before feeding the features into LLMs, CREMA further leverages a self-gating modality fusion guided by the video features. Such an approach has the advantage of significantly less trainable parameters and competitive performance across multiple datasets, including MUSIC-AVQA, SQA3D, etc." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* W1: This paper lacks sufficient quantitative or qualitative analysis on why multi-modality assists the model. For example, the MUSIC-AVQA performance in Table 1 can benefit from depth and surface normal information, which is not very intuitive. Therefore, some visualizations or other formats of analysis the authors see fit will greatly enhance the motivation here. I saw Figure 3 and the analysis provided by the authors. However, it is unclear whether the learned Q-former indeed considers these modalities, as indicated by Sec. B.7. Since the author uses self-gate to fuse the modalities, is it possible to analyze the model's reliance on certain modalities with attention scores?\n\n* W2: Following the previous point, the increased number of trainable parameters with more modalities makes it complicated to confirm whether the additional modalities are indeed helpful. For example, adding depth and normal information increases the trainable parameters from 9M to 38M." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "●Q1: In Line 177, the paper states that the Q-Former \"extracts the most informative features from the input modality and removes any irrelevant information\" by generating fixed-length tokens. However, the fixed-length constraint may risk omitting crucial details, particularly for modalities rich in information. To substantiate the claim of extracting only the most informative features, it would be beneficial to include empirical evidence or an ablation study comparing different token lengths and their impact on performance across modalities. \n\n●Q2: In Lines 194-195, the authors mention adding a fully connected layer (shown as a dashed box in Figure 1) for each data type when dimension misalignment occurs. Could you clarify why a fully connected layer was chosen over a potentially lighter-weight approach like interpolation? A fully connected layer seems more computationally intensive, so I am curious about the specific advantages it offers in this context. To clarify the advantages of this design choice, it would be helpful for the authors to provide a brief comparison, perhaps in terms of computational cost and performance, with lighter-weight options such as interpolation. \n\n●Q3: In Line 233, the authors select video queries as the \"major\" modality, with other modalities as \"supportive,\" explaining this choice as mirroring human perception in video reasoning tasks. Could you clarify the rationale behind prioritizing video in this way? 
Additionally, was a sensitivity analysis conducted to verify the impact of this design choice? I am curious whether this prioritization consistently benefits performance across tasks or if certain scenarios might require a different modality emphasis. To support this prioritization, the authors could consider presenting results from ablation studies or sensitivity analyses across various tasks and modality combinations, demonstrating whether prioritizing video consistently enhances performance or if other scenarios might benefit from different modality emphasis. \n\n● Q4: In Line 259, the authors describe decomposing the back-propagation process for different modalities in each iteration. Could this approach limit the model’s ability to capture interactions between modalities, which is critical for vision-language tasks? It seems more like a trade-off for efficiency rather than a true remedy, as mentioned. This decomposition may prevent the model from fully learning cross-modal interactions and effectively fusing information across modalities. Could you clarify this design choice and its potential impact on performance?\n\n● Q5: In Line 456, the paper mentions achieving a 'regularization effect on the model through parameter-efficient updates.' Could you elaborate on the specific mechanisms or components within CREMA that contribute to this regularization effect? Additionally, how does this approach enhance model generalization across various modalities?\n\n● Q6: Could CREMA also accommodate remote sensing imagery as an input modality? Remote sensing images, captured from satellites or drones, provide detailed information on Earth’s surface across multiple spectral bands. If CREMA can process this type of data, would specific adaptations be needed to handle its unique spatial and spectral characteristics?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This paper introduces CREMA, a novel, parameter-efficient framework that enables the seamless addition of new modalities without altering the core architecture—a significant advantage over existing models like BLIP-2 and SeViLA, which rely on fixed modality inputs and require extensive parameters. CREMA effectively integrates diverse modalities, such as 3D, thermal, and audio data, by projecting them into a unified representation space interpretable by the model for reasoning.\n\nKey architectural innovations, including self-gated multimodal query fusion and sequential modality training, bring practical improvements to multimodal reasoning tasks, particularly in video-language applications. CREMA demonstrates broad applicability and efficiency across seven video-language reasoning tasks, achieving notable accuracy gains in VideoQA and 3D reasoning. Through reductions of over 90% in parameter requirements and optimizations like modality-sequential training and adaptive early exit, CREMA marks a significant advancement in multimodal reasoning, validated through extensive fine-tuning and zero-shot evaluations." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes CREMA, a flexible and efficient framework for video-language reasoning that incorporates multiple modalities, including optical flow, audio, thermal maps, and 3D point clouds. 
CREMA addresses the limitations of current multimodal models that require extensive parameters and fixed modality inputs by introducing a modular, parameter-efficient design. This framework allows seamless integration of new modalities while reducing computational costs, validated by superior performance on seven diverse reasoning tasks compared to baseline models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "● Prioritizing certain modalities as primary lacks quantitative backing; a sensitivity analysis across diverse tasks would help validate this design choice.\n\n● The Q-Former generates fixed-length tokens for each modality to extract the most informative features and remove irrelevant information. However, this fixed-length constraint could risk omitting valuable details, particularly in modalities with high information density.\n\n● The decomposition of back-propagation by modality, while efficient, may limit the model’s ability to fully capture interactions between modalities, impacting the quality of multimodal reasoning." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- The motivation behind the adaptive early exit is not clear. The equation used on line 284 says that if the gradient for a modality is larger than a threshold, then it will exit training. Shouldn't it be smaller than a threshold, since the gradient scale will be small after convergence?\n- Why use a sigmoid function in the fusion module in equation (3)? It seems to apply only a scaling to the original $\bar{q}_V$, which may not be necessary." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "-\tThe proposed model is a general framework for language QA based on multimodal video inputs. It achieves impressive performance on a wide range of tasks: audio-video QA, 3D situated QA, touchQA/thermal QA, etc.\n-\tSome ablation studies are conducted for the choice of early-exit strategy and modality fusion method (Tables 7 & 8)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work presents a multi-modal LLM pipeline, CREMA, for joint learning from different modalities (visual, depth map, optical flow, audio, etc.) that are synchronized with a video input. Built on top of existing multimodal encoders and an LLM, it proposes modality-specific queries and a modality fusion module to incorporate inputs from different modalities while keeping a low trainable parameter count. The model is evaluated on tasks that require multimodal inputs: audio-video QA, 3D situated QA, touchQA/thermal QA, etc. It outperforms existing methods that use multimodal inputs, such as OneLLM and 3D-LLM."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "-\tThe performance of the proposed model seems to be dependent on the used multimodal encoders (ZoeDepth, Unimatch, NLL-AngMF to estimate depth, flow, normal, BEATs to encode audio, and ViT-G to encode visual). The comparison to existing methods might be unfair due to different encoders are used. More explanations are needed to verify this.\n-\tThe overall novelty is limited. The proposed model-specific queries and modality fusion module are subtle technical changes that does not bear a strong novelty." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024generalizable,\ntitle={Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3UaOlzDEt2},\nnote={under review}\n}" }, "abstract": { "value": "Despite impressive advancements in recent multimodal reasoning approaches, they are still limited in flexibility and efficiency, as these models typically process only a few fixed modality inputs and require updates to numerous parameters. This paper tackles these critical challenges and proposes CREMA, a generalizable, highly efficient, and modular modality-fusion framework that can incorporate many new modalities to enhance video reasoning. We first augment multiple informative modalities (such as optical flow, 3D point cloud, audio, thermal heatmap, and touch map) from given videos without extra human annotation by leveraging sensors or existing pre-trained models. Next, we introduce a query transformer with multiple parameter-efficient modules associated with each accessible modality. It projects diverse modality features to the LLM token embedding space, allowing the model to integrate different data types for response generation. Furthermore, we propose a novel progressive multimodal fusion design supported by a lightweight fusion module and modality-sequential training strategy. It helps compress information across various assisting modalities, maintaining computational efficiency in the LLM while improving performance. We validate our method on 7 video-language reasoning tasks assisted by diverse modalities, including conventional VideoQA and Video-Audio/3D/Touch/Thermal QA, and achieve better/equivalent performance against strong multimodal LLMs, including OneLLM, BLIP-2, and SeViLA while reducing over 90% trainable parameters. We provide extensive analyses of CREMA, including the impact of each modality on reasoning domains, the design of the fusion module, and example visualizations." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Video-Language Reasoning", "Video Question Answering", "Multimodal Fusion", "Parameter-Efficient Fine-tuning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/a89fcb225996ee12d18fce41702cd0d2d3e3c863.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/3fb72ed604a2c59b892c345c44049415bd578126.zip" }, "title": { "value": "Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3UqIo72Ysq
Representations in a deep end-to-end driving model predict human brain activity in an active driving task
main
Active
fMRI;autonomous driving;human driver modeling;computational neuroscience
applications to neuroscience & cognitive science
3;5;5;5;5;6
2;3;3;5;4;2
2;2;2;3;3;2
2;2;2;3;3;3
2;3;2;3;3;3
4.833333
3.166667
2.333333
2.5
2.666667
0.203005
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Did the authors use one person for each train/validation/test split used in this paper, to avoid data leakage?\n2. How difficult and how long did it take to collect this dataset for these 3 people? How feasible would it be to extend this experiment to a larger number of people to more strongly evaluate this work?\n3. Given that the fMRI captures delayed blood-oxygenation responses, do the authors think that a higher temporal resolution imaging method like EEG could help? \n4. Isn't the combined $R^2$ of just 0.02 in figure 2 too small to find true alignments between the DNN activations and distinct brain networks? How did the authors choose this value?\n5. The paper highlights, in Section 2, some literature connecting fMRI signal with brain activity on driving tasks. Doesn't it mean that the last sentence in introduction (\"Our results are an exciting **first step** towards investigating the cognitive and representational basis for human and AI driving\") is a bit of an overstatement? (I mean given the usage of the term \"first step\")\n6. In section 3.2.1, the paper mentions some apparent modifications to the original LAV implementation, and that \"reasonable inferences\" were verified. Can the authors please provide more details on why and how the LAV model was modified, and what \"reasonable inferences\" mean (eg, how it is defined and evaluated)?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "To the best of my knowledge in this applied field, I believe this work surely pushes forward the intersection of neuroscience and machine learning representation; in this sense, and despite the paper \"looking different\" from typical ICLR papers, I believe this point is in itself a strength of this paper to be accepted at ICLR. \n\nThe choice of Learning from All Vehicles (LAV), a competitive model in autonomous driving, strengthens this study’s relevance; LAV’s multi-module structure allowed the authors to link specific network modules to brain regions performing analogous roles. Another significant strength of this work is how this work was devised and how it collected all the data from an actual fMRI machine to be able to explore the active driving paradigm, instead of the more usual passive tasks in previous literature. \n\nIn my opinion, this paper is original in its methodological developments and how it tackles a clear gap in the literature with a creative combination of rigorous statistical methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates the alignment between human brain activity in the context of autonomous driving and the activations of different modules of a specific deep neural network (Learning from All Vehicles - LAV). 
Human brain activity was captured in the form of functional magnetic resonance imaging (fMRI), and the alignment was performed through Voxelwise Modeling (VM), previously introduced in the literature. This paper argues that both the deep neural network and the human brain may partition the task of driving in a similar way, by showing that each specific LAV module (semantic segmentation, Bird's-eye-view perception, planning, trajectory prediction, hazard detection, and control) was able to predict different meaningful brain areas." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Even though I really enjoyed reading this out-of-the-box paper, and even though I can imagine the insightful discussions this might bring among people attending ICLR, I am afraid this might not be enough for this paper to be accepted at a conference like ICLR. One key point I want to make on this (beyond the weaknesses I list below) is that I believe a person from the field of neuroscience would be necessary for properly analysing this paper. Section 4 contains a lot of discussion and results focused on brain regions and specific neuroscientific knowledge that I believe might be difficult to find at ICLR; evaluating this section seems important to me to really understand the contribution and novelty of this paper, which again supports my point that maybe this might not be the best venue for this paper. A more multidisciplinary journal focused on neuroimaging, where truly diverse peer reviewers might be easier to find, might be better.\n\nWith regards to actual weaknesses that I have found in this paper:\n1. In a conference focused on (computational) representation learning, I find that the dataset size of just 3 people is too small for us to trust these results. In order to avoid data leakage, this basically means that one person would be in the training set, another in the validation set for hyperparameter selection, and another in the test set, which in my opinion hinders the potential trust one has in these results, as we might not have enough individual variability in brain function in such a complex task as driving. Even though the paper is clearly innovative in its methodological approach, it also contains a clear weakness in not providing enough people to truly evaluate its results. Obviously this is not possible to tackle in the rebuttal period, but I think the authors do not provide enough details on how they consider the (small) dataset size in their experiments, and how potential overfitting was avoided.\n2. One thing that I believe is difficult to really evaluate here, and is thus a weakness of this work, is that these correlations might not necessarily imply functional similarity. Some correlations might come from shared contextual factors (I can think, for instance, of vehicle proximity or visual field overlap) rather than true alignment. I do not know in detail some of the methods applied in this paper, so I was wondering whether the authors could comment on how to potentially tackle this weakness?\n3. The paper makes quite a strong statement when it suggests that both the DNN and the human brain may partition tasks in a similar manner. This in itself is a difficult claim to truly evaluate when only looking at one DNN model. 
I'm not sure whether other autonomous driving models are divided in such well-separated modules, and thus it would be important for the paper to include some discussion on the feasibility of applying this framework into other autonomous driving models currently being used in the real world." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "Yes, Responsible research practice (e.g., human subjects, data release)" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Have the authors considered exploring different DNN architectures (e.g., reinforcement learning-based models) to assess if similar regions align across architectures?\n\nCould further studies investigate other interactive tasks, such as social navigation, to see if similar alignment patterns appear in non-driving contexts?\n\nHow might the approach handle potential biases from strong correlations in interactive tasks, and are there additional measures to mitigate this?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper's strengths are highlighted by its innovative integration of neuroscience with machine learning, providing valuable insights into how DNNs may emulate human cognitive processes during complex tasks like driving. The rigorous experimental design, which includes detailed comparisons between brain activity and DNN outputs, enhances the reliability of the findings. Additionally, the alignment of DNN modules with specific functional brain regions suggests a meaningful correspondence between artificial and biological systems, indicating potential pathways for future research in both AI development and cognitive neuroscience." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper investigates the relationship between human brain activity and deep neural network (DNN) activations during an active driving task, specifically using a simulated taxi-driving environment. By employing functional magnetic resonance imaging (fMRI) to capture brain activity, the authors construct voxelwise encoding models that correlate DNN activations from the Learning from All Vehicles (LAV) model with brain responses. The findings indicate that DNN features can explain significant variance in brain activity across various regions, suggesting a parallel in how both systems process complex sensorimotor tasks. This work represents a new effort to bridge insights from neuroscience and artificial intelligence, particularly in understanding cognitive processes during active driving." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The findings rely solely on the LAV driving DNN. Testing multiple DNNs trained with different objectives or architectures could strengthen claims about human-AI alignment in driving. 
\n\nThe experiment’s setup, where humans control the stimulus, introduces correlations that may not reflect true alignment in representations, limiting the generalizability of the findings.\n\nWhile the voxelwise approach is rigorous, the dense presentation and minimal interpretative context might be difficult for a broader ML audience, not sure if ICLR is the best venue for this work." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please refer to Weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The method is straightforward and easy to understand." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a comparison between human brain activity, measured through functional magnetic resonance imaging (fMRI), and activations within deep neural networks (DNNs) during an active taxi-driving task in a naturalistic simulated environment. The study aims to enhance our understanding of the similarities and differences between human cognition and current deep-learning methods in active, closed-loop tasks such as driving." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "This paper focuses on application rather than theoretical innovation. Here are a few questions and considerations regarding the methodology:\n\nThe sample size is limited to only three subjects. Is this sufficient to establish a reliable confidence level in the findings?\n\nWhy was a deep neural network (DNN) chosen over alternative models? Would other models potentially offer comparable or better insights?\n\nThe rationale for using the selected model, such as the VM model, remains unclear. Could you clarify the insights driving this choice?\n\nWhat methods were employed to assess the credibility and robustness of the model? How can we be confident in its generalizability and accuracy?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. The authors may include more information about data processing and mapping in the supplement. \n2. More details about the quantitative analysis could be included. \n3. 
The authors may include some comparison with non-specific encoding models. \n4. How do you align the driving pattern between human and AI? Are they aligned with the same frame or action? As the performance is measured by R^2, is it selective to the current BOLD? What's the difference if you map it to a resting or passive natural stimulus? To what extent is the signal driven by the movement? Some related work could be helpful for the comparison and argument here about the selection and representation, such as:\n [1] https://www.nature.com/articles/s41467-024-53147-y\n [2] https://www.nature.com/articles/s42256-023-00753-y?fromPaywallRec=false\n [3] https://www.sciencedirect.com/science/article/pii/S2095927324001373\n [4] https://openaccess.thecvf.com/content/CVPR2024/html/Yang_Brain_Decodes_Deep_Nets_CVPR_2024_paper.html" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The dataset is quite new. \n2. The visualization is clear and neat." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this article, the authors present an interesting attempt at aligning an auto-driving neural network with human brains scanned while driving. This experiment is a new design and allows for the exploration of new topics in the field." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The sample size is relatively small. I understand the difficulty here and I guess the whole collection is still in the early stage? \n2. The goodness of the mapping is not well evaluated. \n3. The comparison with other methods and infrastructures is missing." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. How well does the LAV model explain brain responses compared to non-DNN baseline models, such as those studied in Strong et al. (2024)?\n2. How well does the LAV model explain brain responses compared to other DNN models? For example, some basic baseline DNN models, such as an ImageNet-trained CNN, or some alternative driving DNN models.\n3. How can we more rigorously measure whether a computational model and the brain partition the task similarly?\n4. How well does the behavior from the computational model align with human behavior?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This paper studies human neural activities in a complex interactive driving task. It investigates to what extent a functional model of driving—and its different submodules—explains/predicts different parts of the neural data. 
Many works in the past investigated how deep neural network models align and/or predict neural responses, but most previous studies focused on perception, reasoning/planning, or control separately, and the tasks were usually much simpler. This work studies driving, a complex interactive behavior involving perception, planning, and control. Going from simple, passive tasks to complex, multifaceted tasks has significant originality. Meanwhile, developing capable computational models and comparing different facets of the model to the brain involves a lot of hard work and innovation in methodology, and this work made progress in that direction. The finding that different submodules of the LAV model explain variance in brain responses in different regions is a novel finding and invites further studies to understand the exact functional roles of different brain regions during a complex task such as driving." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studied how the representations in a deep learning model for autonomous driving can predict human brain responses in an interactive close-loop driving task. They recorded human subjects' brain activities using fMRI while they engaged in a driving simulation task. They extracted activations from artificial neurons in the deep network model receiving stimuli similar to human subjects and used these activations to regress against brain activities. They found that overall, the model explains variances of brain responses across many brain regions in held-out data. They further investigated how different modules in the deep learning model, such as semantic segmentation, planning, hazard detection, and control, explain different parts of the brain responses the best. They found that semantic segmentation and hazard detection modules best predict the visual areas, the planning module best explains variance in the sensorimotor areas and IPS, and the control module is similar to the planning module and, in addition, explains variance in RSC and PPA." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While the task, model, and analysis methods are novel, it is hard to know what we have learned scientifically from the analysis, mainly due to a lack of control experiments and alternative models. I see the central claims in this paper as the following two points.\n\n1. encoding models for DNN activations explain significant amounts of variance in brain activity across many regions of the brain\n2. each functional module in the DNN explains brain activity in a distinct network of functional regions in the brain, ..., suggesting that both the DNN and the human brain may partition the task in a similar manner.\n\nClaim 1 is not novel since it is generally expected that a DNN model can account for variance in neural response, especially when these models are trained to perform the same task. Even randomly initialized DNN models can explain some variance in the brain. Given that, it is essential to see how well the LAV model explains variance compared to other models. Does LAV predict brain activities better in a particular region, or does it predict activities in a broader range of areas? 
For example, the author can compare the LAV model to those non-DNN models studied by Strong et al., 2024., and it would be helpful to have more DNN control models, such as a CNN trained on ImageNet classification or a randomly initialized CNN model.\n\nWhile this paper did show that different submodules of LAV explain variance in different brain regions, the claim that the brain and LAV partition the task in a similar manner is only poorly supported. This is primarily due to a lack of clarity on what \"partitioned similarly\" means. From the presented data, the semantic segmentation and hazard detection modules explain the neural responses in the visual areas. The planning and control modules explain a largely overlapping set of brain regions. These results suggest that the functions performed by these modules are not as clearly segregated in the brain as in the LAV model. Establishing a clear metric to assess whether the brain exhibits a similar functional partitioning as the tested model would be beneficial. This could involve developing a measure of the degree of functional segregation in the model that aligns with brain regions. Adding alternative models or control models would certainly help. For example, there might be a hypothetical model A, whose sub-modules predict all brain regions equally well. Then, it is acceptable to conclude that the LAV model partition is more brain-like than model A.\n\nAdditionally, while this paper mainly focuses on analyzing the neural data, it does not provide any behavioral results. It is hard to see the model as a good model of the brain if it does not perform the task well or does not match human behavior well. It would be helpful to see how well the LAV model is aligned with humans behaviorally. For example, the navigation decisions between the LAV model and human subjects can be compared when given the same simulator inputs.\n\nReference:\n\nStrong, C., Stocking, K., Li, J., Zhang, T., Gallant, J. and Tomlin, C., 2024, June. A framework for evaluating human driver models using neuroimaging. In 6th Annual Learning for Dynamics & Control Conference (pp. 1565-1578). PMLR." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "No" }, "flag_for_ethics_review": { "value": [ "Yes, Responsible research practice (e.g., human subjects, data release)" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. What is the variability across subjects? When the authors mentioning the group-level performance, does that mean average across subjects?\n\n2. How does the random projection matrix affect the results?\n\n3. Is there any statistical measure quantifying the significance of the better predictive ability of one brain region compared to other regions?" 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The topic comparing the brain activity to a autonomous driving model is quite new to the field which can be insightful for understanding the brain activity during planing, decision making. The submission collects the data with this new system is a good start point for the following research." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on the alignment between deep learning models and brain activity. Unlike previous studies, which examine the alignment of visual or language models with brain activity, this work explores a deep learning model for autonomous driving. Specifically, the paper utilizes the LAV model, which has clearly separated functional modules, including semantic segmentation, bird's-eye view perception, planning, trajectory prediction, and hazard detection. The outputs from each module demonstrate varying predictive capacities across functionally distinct brain regions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Though the topic is new, mapping an autonomous driving model with distinct functional modules to different brain regions is promising, but the current results are not yet strong enough. For instance, the control module outputs show high predictive ability across multiple brain regions, it would be beneficial if the authors could demonstrate whether these regions are consistent across random sees, subjects and providing some statistical significance measure.\n\nAdditional concerns are as follows:\n\nThe authors performed regression analysis to align LAV model outputs with brain activity. It would be helpful to clarify whether the observed distinct predictive abilities are specific to the LAV model or if they generalize across other autonomous driving models, such as that proposed by Li et al., 2024 [1].\n\n[1] Li et al., 2024, https://arxiv.org/html/2406.08481v1.\n\nPredictive ability is a coarse measure, as it only indicates that the variability in model outputs aligns with the variability in brain activity. This makes it difficult to draw conclusions such as \"representations learned by the driving DNN may be similar to those used by the human brain.\" The authors could explore additional metrics beyond regression fitting to better align brain activity, such as fMRI, with artificial neural networks. A discussion on the impact of metrics on alignment-related conclusions would also be beneficial [2].\n\n[2] Soni et al., 2024, https://www.biorxiv.org/content/10.1101/2024.08.07.607035v1.full.pdf.\n\nWhile the topic is interesting, current technical contribution is not very significant." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024representations,\ntitle={Representations in a deep end-to-end driving model predict human brain activity in an active driving task},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3UqIo72Ysq},\nnote={under review}\n}" }, "abstract": { "value": "Understanding how cognition and learned representations give rise to intelligent behavior is a fundamental goal in both machine learning and neuroscience. 
However, in both domains, the most well-understood behaviors are passive and open-loop, such as image recognition or speech processing. In this work, we compare human brain activity measured via functional magnetic resonance imaging with deep neural network (DNN) activations for an active taxi-driving task in a naturalistic simulated environment. To do so, we used DNN activations to build voxelwise encoding models for brain activity. Results show that encoding models for DNN activations explain significant amounts of variance in brain activity across many regions of the brain. Furthermore, each functional module in the DNN explains brain activity in a distinct network of functional regions in the brain. The functions of each DNN module correspond well to the known functional properties of its corresponding brain regions, suggesting that both the DNN and the human brain may partition the task in a similar manner. These results represent a first step towards understanding how humans and current deep learning methods agree or differ in active closed-loop tasks such as driving." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "fMRI", "autonomous driving", "human driver modeling", "computational neuroscience" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/87c480698e256ddb607d496823e23a0a36fcbaba.pdf" }, "presentation": null, "primary_area": { "value": "applications to neuroscience & cognitive science" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/ffe773cf379b039810e58a0761a78ccc438ad187.pdf" }, "title": { "value": "Representations in a deep end-to-end driving model predict human brain activity in an active driving task" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
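The abstract and reviews above refer to voxelwise encoding models: regressing each voxel's fMRI response on DNN-module activations and scoring the fit with held-out $R^2$. Below is a minimal sketch of that generic recipe (ridge regression on simulated data); the array sizes, the single ridge penalty, and the omission of hemodynamic-delay features and cross-validation are simplifying assumptions, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_feat, n_vox = 200, 50, 64, 10    # toy sizes, not the paper's

X_tr = rng.standard_normal((n_train, n_feat))       # DNN-module activations per time point
X_te = rng.standard_normal((n_test, n_feat))
B_true = 0.3 * rng.standard_normal((n_feat, n_vox))
Y_tr = X_tr @ B_true + rng.standard_normal((n_train, n_vox))   # simulated BOLD responses
Y_te = X_te @ B_true + rng.standard_normal((n_test, n_vox))

lam = 10.0                                           # ridge penalty (assumed; normally cross-validated)
B = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(n_feat), X_tr.T @ Y_tr)

pred = X_te @ B
ss_res = ((Y_te - pred) ** 2).sum(axis=0)
ss_tot = ((Y_te - Y_te.mean(axis=0)) ** 2).sum(axis=0)
r2_per_voxel = 1.0 - ss_res / ss_tot                 # one held-out R^2 per voxel
print(np.round(r2_per_voxel, 2))
```

Running the same regression separately with features from each DNN module, and mapping the per-voxel $R^2$ back onto cortex, is the kind of analysis the reviews discuss when comparing modules to brain regions.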
3VD92FuNCd
Measuring the Contribution of Fine-Tuning to Individual Responses of LLMs
main
Active
Large Language Models;Interpretability;AI Safety
foundation or frontier models, including LLMs
3;5;6;8
3;3;3;4
2;3;3;3
2;2;3;3
2;2;3;3
5.5
3.25
2.75
2.5
2.5
0.800641
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Does a formulation which aligns more closely with Prop 4.5 have worse empirical performance?\n- Why is PTC defined as the sum of $PTC(x^{FT}_s, s)$ rather than $PTC(x^{PT}_s, s)$?\n\nComments:\n- If possible, the typesetting of Proposition 4.5 should be improved.\n- On line 323, $PreCo(x)$ should be defined as $1 - TuCo(x)$" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The authors present a novel metric, TuCo, for identifying the relative contribution of fine-tuning on a given sample.\n- The authors present evidence that TuCo is a useful tool in the analysis of model behavior. In particular, jailbreaks tend to decrease the contribution of fine-tuning as measured by TuCo, which obtains strong results in terms of discriminating between jailbroken and unmodified prompts." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a novel measurement of the relative contribution of fine-tuning on a sample derived from the difference between the effects of the pretrained model and the full pretrained model in each layer, and it shows that this metric can be used to identify jailbreaks and that intervening on it can be used to steer model behavior." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The gap between the formulation of Prop 4.5 and the definition of TuCo is not adequately explained: many alternative formulations are possible. In particular, it should be made clear why the proposed formulation is the right one.\n- Simpler baselines are not considered:\n - For example, a simpler approach might take only differences between the final hidden states of the two models into account.\n - Such an output-only definition is equivalent to a variation of TuCo which takes the compositional structure of the pretrained model into account.\n- Along these lines, it is unclear why it is better to view the differences between layers l of the two models in isolation, ignoring the compositional effect of the deviation between the two models. \n- While it suffices to represent the decomposition into PTC and FTC, it is unclear that the notation presented in 4.2 and 4.3 is a natural way to represent the decomposition of a model into circuits. In particular, the notation hides the compositional structure of the circuits in $C_1$ and necessitates that when taking composition into account, the circuits are no longer disjoint.\n- Proposition C.1 (iii) appears to be incorrect: the proof claims that the equation on line 1002 holds for arbitrary disjoint $C_1$ and $C_2$. This appears to instead be a required assumption. 
For a trivial counterexample, consider scaling the components in $C_1$ by a constant factor and subtracting the difference from those of $C_2$." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "N/A" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The interpretation of models remains a persistent and significant challenge in the field of deep learning.\n\n2. Fine-tuning LLMs has become a prevalent practice. Elucidating the mechanisms of LLM fine-tuning could potentially enhance this process, thereby contributing to the broader understanding and application of these sophisticated models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work studies how fine-tuning LLMs contributes to the individual response. The authors propose a decomposition of post-trained LLM into a pre-training component and fine-tuning component and define a Tuning Contribution of these two components. Empirical evaluation shows that TuCo is sensitive to language model inputs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Overall the work is ad-hoc.\nThis study introduces and quantifies several metrics across diverse contexts. However, it appears to lack novel insights into LLM fine-tuning or practical guidelines. For the observed disparities in model outputs across various inputs (for example, among different languages, or harmful prompts with and without adversarial strings), because the outputs are different in those settings, it is not hard to define quantities that distinguish them. In addition, while Proposition 4.5 establishes a theoretical bound on these metrics, its practical application or utility within the study remains unclear.\n\n\n2. The paper is not well-written. The study presents multiple definitions and evaluation frameworks; however, the organization appears arbitrary, lacking a cohesive and succinct narrative. Moreover, the introduction of a novel metric within the evaluation section deviates from conventional structure, potentially compromising the clarity and flow of the presented research." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I appreciate the contribution of this paper, and I only have some minor questions mentioned in the box of Weaknesses." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "I believe that this paper has its own contribution. While the basic idea is simple, the authors show that it can truly reveal the behaviours of models, making it a very useful tool in understanding the consequences of fine-tuning in practice. The authors also provide some theoretical analysis to support their claim, further solidifying their findings. Moreover, the experimental analysis looks sound to me, and the results quite align with my intuitions." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper aims to quantifying the impact of fine-tuning on individual outputs of LLMs. The authors propose TuCo, a metric designed to measure the contribution of fine-tuning to an LLM's output for any given prompt. The key idea is to decompose the fine-tuned model's representations into two components: a) PTC: The output of the corresponding layer in the pre-trained model, and b) FTC: The difference between the outputs of the fine-tuned model and the pre-trained model at each layer. The paper provides a theoretical foundation for this decomposition and demonstrates that the relative magnitudes of the PTC and FTC can bound the discrepancy between the outputs of the pre-trained and fine-tuned models. Empirically, the authors validate their approach by: a) Scaling the FTC: Showing that adjusting the magnitude of the FTC can steer the model's behavior and performance on specific tasks. b) Analyzing Adversarial Attacks: Investigating how three types of jailbreak attacks affect TuCo. The findings suggest that these attacks reduce the TuCo, meaning they attenuate the effect of fine-tuning and exploit the model's pre-training behaviors to circumvent safety measures." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. I wonder if additional discussion about the difference between TuCo and robust fine-tuning (https://arxiv.org/abs/2109.01903) / task vectors (https://arxiv.org/abs/2212.04089) would be beneficial. It seems that the difference is that previous works typically attenuate the effects of fine-tuning by parameter scaling, while your work employs output scaling, especially for the section 5.1 - 5.2.\n\n2. The authors mainly focus on the quantitive analysis in the main body of the paper. Considering that many of the adopted metrics for LLMs can be misleading, is it possible the authors further provide some qualitative analysis for the results, especially echoing Figs 3-4. For example, what the model output changes across different values of alpha. Is it possible that the improper choices of alpha will make model output random characters or nonsensical strings? \n\n3. Intuitively, I think the paper may have some interesting contributions to the community beyond the mentioned ones in the main content and conclusion. I wonder if the authors could discuss more about the potential usages and applications of TuCo in practice. \n\n4. 
I also found a small typo in the section page: Perez et al (2022) should be changed to (Perez et al 2022)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Could the authors explain or discuss more about the theoretical implications behind the proposition results?\n2. Could the authors also analyze the computational cost of TuCo and discuss why you only consider the magnitude of the fine-tuning component on the last token's hidden state, as represented by the function $proj_n(\\cdot)$?\n3. Please refer to the third point in the weaknesses." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper focuses on quantitatively investigating the effects of fine-tuning on individual prompts, which is, at least from the perspective of the reviewer, a novel research problem, and provides insights for understanding model behavior and performance within a systematic conceptual framework.\n2. This work provides a decomposition of a fine-tuned LLM as an embedding superposition of a pre-training component and a fine-tuning component, leveraging the residual architecture of Transformer LLMs. It is reasonable and extendable for further analysis of model behavior.\n3. In general, the illustration is clear and provides an intuitive explanation of the two decomposed components and the analytic framework, and the computation of the pre-prompt tuning contribution is also easy to understand.\n4. Both theoretical analyses based on the generalized decomposition and empirical results with jailbreak attacks are provided to demonstrate the effectiveness of TuCo, and provide some further insights on understanding model behavior and the safety of LLMs." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on quantitatively analyzing the effect of fine-tuning on individual outputs of large language models (LLMs). To be specific, this work introduces a decomposition of a fine-tuned model into a pre-training component and a fine-tuning component, through which it shows that model behavior can be steered by up-/down-scaling the fine-tuning component during the forward pass. Based on that, this work proposes the Tuning Contribution (TuCo) in terms of the ratio of magnitudes and investigates its utility on adversarial attacks for LLMs. Both empirical and theoretical results are provided to demonstrate the rationality of the proposed TuCo and provide in-depth insights into a quantitative study of how fine-tuning influences model behavior." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
Although this work provides the canonical decomposition of a fine-tuned model with theoretical results based on the Grönwall bound, the current version provides limited discussion of the implications behind the derived proposition, making it hard to understand the significance of the analytical results and to draw further insights from the analysis.\n2. The computational cost of TuCo is not considered in the experiments, and it would be better if the current version could incorporate another detection method to enable an empirical comparison with TuCo on detection tasks, which would provide more convincing results on the effectiveness of TuCo.\n3. I do not quite understand why the decomposition can be regarded as an exact decomposition, and whether there is any gap with the idealized setting stated in Section 4.2 for the motivation, as the authors state it is informally motivated." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We give a principled metric quantifying how much the fine-tuning stage contributed to the output of an LLM, and explore its relationship to model behavior and safety." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024measuring,\ntitle={Measuring the Contribution of Fine-Tuning to Individual Responses of {LLM}s},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3VD92FuNCd},\nnote={under review}\n}" }, "abstract": { "value": "Past work has studied the effects of fine-tuning on large language models' (LLMs) overall performance on certain tasks. \nHowever, a way to quantitatively and systematically analyze its effect on individual outputs is still lacking.\nIn this work, we propose a new method for measuring the contribution that fine-tuning makes to individual LLM responses, assuming access to the original pre-trained model. \nWe introduce and theoretically analyze an exact decomposition of any fine-tuned LLM into a pre-training component and a fine-tuning component.\nEmpirically, we find that one can steer model behavior and performance by up- or down-scaling the fine-tuning component during the forward pass.\nMotivated by this finding and our theoretical analysis, we define the Tuning Contribution ($\mathrm{TuCo}$) in terms of the ratio of the magnitudes of the fine-tuning component and the pre-training component.\nWe find that three prominent adversarial attacks on LLMs circumvent safety measures in a way that reduces the Tuning Contribution, and that $\mathrm{TuCo}$ is consistently lower on prompts where the attacks succeed compared to ones where they don't. \nThis suggests that attenuating the effect of fine-tuning on model outputs plays a role in the success of these attacks.\nIn summary, $\mathrm{TuCo}$ enables the quantitative study of how fine-tuning influences model behavior and safety, and vice versa." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Large Language Models", "Interpretability", "AI Safety" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/570a0d8e578f86dc9f7e3126b45299a7a4cfe258.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/a140b00434afa313541165bb8a1ad987673beb38.zip" }, "title": { "value": "Measuring the Contribution of Fine-Tuning to Individual Responses of LLMs" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3VOKrLao5g
KAAN: Kolmogorov-Arnold Activation Network --- a Flexible Activation Enhanced KAN
main
Active
Kolmogorov-Arnold representation Theorem;Kolmogorov-Arnold Network;Multi-Layer Perceptrons
other topics in machine learning (i.e., none of the above)
3;3;5;6
3;3;4;3
3;2;3;3
2;1;2;3
2;3;3;3
4.25
3.25
2.75
2
2.75
0.333333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. I would suggest authors to reevaluate the core contributions and rewrite the paper. If the main contribution is empirical in nature, I would suggest doing more experiments on transformer-like architectures or showing taks where MLPs or KANs fail to learn underlying functions correctly but the proposed method can.\n2. What is the meaning of “KAN faces the challenges of being unintuitive and inflexible.” This is a highly subjective statement, giving concrete examples of what inflexible and unintuitive means would help readers. Does KAAN help give more flexibility or intuition? If so, how? What is the takeaway?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper is written clearly and concisely, and is easy to read. \n2. The proposed activation makes KANs more flexible and easy to deploy which would encourage the scientific community to experiment with these networks.\n3. Experiments clearly demonstrate that the proposed activation function allows KANs to be trained while achieving comparable performance to MLPs and even ResNets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Authors propose Kolmogorov-Arnold activations inspired from KANs (Kolmogorov-Arnold Networks) and replace B-splines in KANs to achieve similar or better performance than MLPs. Authors show that MLPs can be represented in a form conforming to Kolmogorov-Arnold representation Theorem (KAT). Using MLP-like equipped with Kolmogorov-Arnold activations, authors experiment and compare different basis functions. Experiments also demonstrate successful integration with Convolutional Neural Networks (CNNs) which achieving comparable performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Novelty is missing: KAN arxiv report (Liu et. al 2024) already gives a MLP-like interpretation of KANs which allows stacking of layers similar to MLPs which is similar to section 3 in the paper.\n2. Authors have essentially replaced splines, which is a core contribution of the original KAN paper (provides higher degree of control to model univariate functions) with learnable activation functions. There is already literature covering learnable activation functions with different basis like Polynomial or sinusoidal basis (in context of MLPs). Therefore I feel the paper doesn’t bring new insights into Neural Networks or KANs." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. What are the parameter counts/VRAM consumption/running time for the tested KAANs vs. MLPs/CNNs? \n2. Is it possible to compare KAANs with standard networks that use the same number of parameters?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The proposed approach clearly works on the presented tasks, and in some cases provides a performance improvement.\n2. The KAAN parametrization is compatible with standard ANN architectures.\n3. Related to the previous point, this parametrization might be helpful for neural architecture search/meta-learning/similar approaches that adapt neural networks’ architectures, as the nonlinearity parameters are designed to be differentiable." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents Kolmogorov-Arnold Activation Networks, an extension of Kolmogorov-Arnold Networks, that uses MLP/CNN-like architecture with flexible activation functions defined for each edge between neurons." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "2. Memory and computation time requirements\n\nThe computational requirements of KAANs appear to be much higher than for corresponding standard MLPs/CNNs. Eq. 6 uses several weights per connection (one for each activation type) and additionally parametrizes the activations. This should increase both memory consumption and running time of KAANs compared to standard networks. \n\nThe increased number of parameters in KAANs also (unless I missed something) suggests the performance improvements (Tabs. 3-5) are very modest compared to standard networks that use several times fewer parameters. \n\n3. Lack of interpretability\n\nThroughout the paper, KAANs are called intuitive. However, I do understand how KAANs are more intuitive than standard MLPs (if anything, they are more convoluted). The results in Tabs. 3-5 indirectly confirm my concern: there’s no clear winner across different combinations of activation functions.\n\nLines 300-311 discuss the potential uses cases for each activation function, but all of those apply to standard ANN architectures that don’t define edge-based nonlinearities. \n\n4. Poor writing\n\nThe paper needs some writing improvements. Here are some instances I’ve noticed, although text needs overall polish.\n1. [Line 30] “There were not many breakthroughs until KANs” [rephrased] – I would disagree, and suggest Transformers as an obvious architectural breakthrough. But, the list can expand with for instance capsule networks (https://www.sciencedirect.com/science/article/pii/S1319157819309322) and gflownets (https://arxiv.org/abs/2111.09266). \n2. 
The introduction contains many terms, such as LANs and TANs, but they’re not cited until related work. \n3. “No many” instead of “not many” in line 30, extra bracket in line 81, typo in line 205, non-plural “Experiment” name for Sec. 5" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "My opinion could shift towards acceptance if the authors could address one of the following points:\n\nDevelop a method to identify the optimal combination of basis functions.\n\nFind a specific combination of basis functions that consistently outperforms others.\n\nDemonstrate that in certain specific tasks, KAANs offer a significant advantage." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The theoretical framework is elegantly and solidly constructed.\n\nIt points out that “MLP represents a specific instance of KAN”.\n\nIt points out that “any continuous univariate basis functions can be used as activation function”.\n\nKAANs offer greater flexibility and fewer limitations than traditional KANs, making them more adaptable to various structures.\n\nThe paper conducts extensive experiments across a multitude of AI-related tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a novel architecture named KAANs, which enhances the efficiency of MLPs by incorporating a method inspired by KANs. Theoretically, the paper begins by establishing that MLPs are a subset of KANs and then deviates from traditional KANs by replacing B-spline activation functions with linear combinations of basis functions. Experimentally, the paper evaluates 7 different combinations of basis functions as activation functions across various AI-related tasks, demonstrating that KAANs achieve higher accuracy than both MLPs and KANs.\n\nWhile the theoretical foundation is robust and compelling, KAANs just replace the activation functions in MLPs with more complex functions. When trying to search for the optimal combination of basis functions along with the most effective weights, the concept goes back to learnable activation functions. Therefore, it appears that the paper has elegant theory but not enough contributions at the practical level." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper experiments with various combinations of basis functions, where different combinations excel in different tasks. This variability raises questions about how to determine the most effective combination for a given task.\n\nAlthough KAANs outperform MLPs and KANs in the experiments, the comparison may not be entirely fair. The more complex activation functions used in KAANs require greater computational power compared to MLPs, potentially skewing the results. 
Similarly, comparing KAANs to KANs without adjusting for KANs' longer training requirements may not provide a balanced view of their respective efficiencies." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. It would be interesting to elaborate more on the perspective in Sec 3.2 and gain more motivation for the comparison between KANs and MLPs.\n\n2. How does (C)KAAN perform on more challenging tests?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The authors proposed a novel framework of viewing MLPs as a special case of KANs.\n\n2. They conducted extensive experiments on challenging datasets including tabular datasets and CIFAR-10, and introduced a convolutional version as well." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors proposed a novel framework of viewing MLPs as a special case of KANs and, inspired by this, proposed KAAN, where each nonlinear activation function is parametrized by a linear combination of basis functions. They conducted extensive experiments on challenging datasets including tabular datasets and CIFAR-10, and introduced a convolutional version as well. The article presented an interesting perspective and should be treated as a nice improvement on KANs, with the following limitations.\n\n1. While KAAN seems interesting, it still seems to be a particular way of parametrizing the nonlinearity in KANs, with a more complicated nonlinearity. This improvement is at best incremental and would need more support from numerical evidence.\n\n2. The referee would envision that KAANs suffer from less interpretability than KANs, especially for symbolic regression. Could the authors comment on this restriction?\n\n3. It would be interesting to elaborate more on the perspective in Sec 3.2 and gain more motivation for the comparison between KANs and MLPs.\n\n4. How does (C)KAAN perform on more challenging tests?" }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While KAAN seems interesting, it still seems to be a particular way of parametrizing the nonlinearity in KANs, with a more complicated nonlinearity. This improvement is at best incremental and would need more support from numerical evidence.\n\n2. The referee would envision that KAANs suffer from less interpretability than KANs, especially for symbolic regression. Could the authors comment on this restriction?"
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024kaan,\ntitle={{KAAN}: Kolmogorov-Arnold Activation Network --- a Flexible Activation Enhanced {KAN}},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3VOKrLao5g},\nnote={under review}\n}" }, "abstract": { "value": "Kolmogorov-Arnold Networks (KAN) have led to a significant breakthrough in the foundational structures of machine learning by applying the Kolmogorov-Arnold representation theorem. Through this approach, the target conditional distribution is expressed as the summation of multiple continuous univariate B-spline functions. However, KAN faces the challenges of being unintuitive and inflexible. To address this issue, we analyze the structural configurations of Multi-Layer Perceptrons (MLPs) and KANs, finding that MLP can be represented in a form conforming to Kolmogorov-Arnold representation Theorem (KAT). Therefore, we propose MLP style KAN framework Kolmogorov-Arnold Activation Network (KAAN), which is more intuitive, flexible and transferable. To verify the flexibility and transferability of our approach, we extend it to Convolutional Neural Network (CNN). Also, we demonstrate that parameter sharing is beneficial not only for efficiency but also for effectiveness. KAAN shows better representation capacity than MLP on several benchmarks. Furthermore, our experiment results lead us to conclude that this method is feasible for integrating modern network approaches such as CNNs." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Kolmogorov-Arnold representation Theorem", "Kolmogorov-Arnold Network", "Multi-Layer Perceptrons" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/7d6f9894896cebea42c320cce2d79f7174c06ec5.pdf" }, "presentation": null, "primary_area": { "value": "other topics in machine learning (i.e., none of the above)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/b403efd039b2bac6ed4967c70a81be67ce993dde.zip" }, "title": { "value": "KAAN: Kolmogorov-Arnold Activation Network --- a Flexible Activation Enhanced KAN" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3VxEGpamLT
JAMUN: Transferable Molecular Conformational Ensemble Generation with Walk-Jump Sampling
main
Withdraw
transferable;conformation;ensembles;3D structure;equivariance;sampling;proteins
applications to physical sciences (physics, chemistry, biology, etc.)
Ameya Daigavane;Bodhi P. Vani;Joseph Kleinhenz;Joshua A Rackers
~Ameya_Daigavane1;~Bodhi_P._Vani1;~Joseph_Kleinhenz1;~Joshua_A_Rackers1
0
0
0
0
0
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": { "value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors." } }, { "TLDR": { "value": "We present JAMUN, a new model for generating conformational ensembles for small proteins at rapid rates." }, "_bibtex": { "value": "@misc{\ndaigavane2024jamun,\ntitle={{JAMUN}: Transferable Molecular Conformational Ensemble Generation with Walk-Jump Sampling},\nauthor={Ameya Daigavane and Bodhi P. Vani and Joseph Kleinhenz and Joshua A Rackers},\nyear={2024},\nurl={https://openreview.net/forum?id=3VxEGpamLT}\n}" }, "abstract": { "value": "Conformational ensembles of protein structures are immensely important to understanding protein function. Current techniques for sampling ensembles are computationally inefficient, or do not transfer to systems outside their training data. We present walk-Jump Accelerated Molecular ensembles with Universal Noise (JAMUN), a step towards the goal of efficiently sampling the Boltzmann distribution of arbitrary proteins. By extending Walk-Jump Sampling to point clouds, JAMUN enables ensemble generation at orders of magnitude faster rates than traditional molecular dynamics or state-of-the-art generators. Further, JAMUN is able to predict the stable basins of small peptides that were not seen during training." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": { "value": [ "~Ameya_Daigavane1", "~Bodhi_P._Vani1", "~Joseph_Kleinhenz1", "~Joshua_A_Rackers1" ] }, "authors": { "value": [ "Ameya Daigavane", "Bodhi P. Vani", "Joseph Kleinhenz", "Joshua A Rackers" ] }, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "transferable", "conformation", "ensembles", "3D structure", "equivariance", "sampling", "proteins" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": { "value": "daigavane|jamun_transferable_molecular_conformational_ensemble_generation_with_walkjump_sampling" }, "pdf": { "value": "/pdf/0bd60461563017f1420e9ea607ff69b87ef6c122.pdf" }, "presentation": null, "primary_area": { "value": "applications to physical sciences (physics, chemistry, biology, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "JAMUN: Transferable Molecular Conformational Ensemble Generation with Walk-Jump Sampling" }, "venue": { "value": "ICLR 2025 Conference Withdrawn Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Withdrawn_Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3WqfSoxLIh
FDTDNet: Privacy-Preserving Lensless Object Segmentation via Feature Demultiplexing and Task Decoupling
main
Active
Lensless Object Segmentation; Lensless Imaging; Privacy-Preserving; Feature Demultiplexing; Task Decoupling
applications to computer vision, audio, language, and other modalities
6;6;6;6
4;3;2;4
3;3;4;3
3;3;3;3
3;3;2;3
6
3.25
3.25
3
2.75
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please address the points brought up in the weakness section above." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is generally well motivated (except for the question of privacy below) and written. It makes sense that a single unified approach would work better than segmenting reconstructed images.\n- The OFD approach is novel and interesting. It has the potential to be useful beyond the segmentation task as a general way of processing lensless measurements for vision tasks.\n- The experiments and ablations are extensive and largely convincing." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a model for segmentation that operates on measurements from a lensless camera. Instead of prior approaches that first attempt to reconstruct an RGB image and then carry out segmentation, the paper's approach directly operates on the lensless measurements. The architecture is endowed with knowledge of the optical measurement process through \"optical feature demultiplexing\", along with other innovations. Experimental results confirm the benefits of this approach." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper adds an unnecessary \"privacy preserving\" claim (in its title!) that is really only discussed in the (first paragraph of the) introduction, and mostly by citing other papers. Privacy preserving is a strong claim and should not be made without more care. If anything, a paper that shows improved performance at segmentation implies that lensless measurements carry a fair amount of information about the underlying scene, and could leak private details. A video of segmentation masks could, for example, be enough to identify people by gaits. At that point, we get to deciding what privacy preserving means and what kind of privacy is being preserved.\n\n But this entire question is un-necessary to the central contribution of the paper --- a better segmentation approach for lensless cameras. The paper would be stronger, and in my opinion more sound, if it dropped the superfluous privacy claim from its title.\n\n- The ODM + CDM approach could be explained a bit better, and especially discussed more with related work. Has this division into subtasks been tried before? How does this relate to CDMNet?\n\n- Minor point, but the paper should make the experimental results section a bit more self contained and describe the content of the two benchmark datasets." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1) Can the authors provide further insights into how the method might generalize to more complex datasets, particularly in scenarios where small objects or highly cluttered backgrounds are present? \n2) How does the proposed FDTDNet handle noise in real-world lensless measurements? Could additional noise abatement strategies enhance the robustness of the segmentation?\n3) Could the authors expand on the potential for adapting the method to edge devices, considering the computational demands highlighted in the complexity analysis?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1) Quality: All figures and tables are well-designed and of high quality, except Figure 2, which will be discussed in the weaknesses section below.\n2) Performance: Experiments across two different datasets validate the method’s performance. this proposed approach consistently outperforms competing methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents FDTDNet, a framework for object segmentation using lensless cameras, designed to enhance privacy by bypassing visual image reconstruction." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "This paper has two main issues:\n\n1) Clarity:\n- The equations are overly complex. The mathematical presentation, particularly in the OFD mechanism on page 3, lines 162-215 (Equations 3-11), is overly dense and challenging to understand.\n- This section mainly stacks equations without sufficient explanation, making it difficult for readers to grasp the underlying principles. It would be beneficial to include more intuitive or conceptual explanations alongside these equations. \n- Additionally, labeling elements of Figure 2 to indicate which parts correspond to specific equations could greatly improve clarity. Given the length and complexity of this section, I suggest either simplifying the equations or providing clearer explanations.\n\n2) Analysis: \n- The paper could benefit from a more in-depth discussion of its limitations. \n- Although some failure cases are illustrated in Figure 12 on page 16 (Appendix), it would be helpful to place these directly in the main text and discuss potential solutions more explicitly. \n- Discussion about addressing these limitations directly within the main body is suggested.\n- However, this is a minor suggestion. My main concern is the first point about clarity." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "In OFD, one downsampling and two CBRs are used to transform $A_L $ or $A_R$ into its semantic space, and one PVT is used to transform the measurement Y into its semantic space. Why not do the same for AL/AR and Y? What is the author's consideration?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The originality is supported by modelling the linear equation between the semantic features bound to lensless measurements and those corresponding to visual inputs, and application of multiple current machine learning methods to a new domain, i.e., lensless object segmentation. The quality, clarity and significance of this work is good." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "To enhance segmentation accuracy while ensuring privacy, the authors propose a one-step method called FDTDNet for lensless object segmentation from lensless measurements without visual reconstruction. They propose an optical-aware feature demultiplexing (OFD) mechanism aimed at refining the features obtained from lensless measurements via modeling the linear equation between the semantic features bound to lensless measurements and those corresponding to visual inputs. They decouple the segmentation task into a contour distribution map (CDM) and a body distribution map (BDM) inference by contour-/bodydistribution learning branches, and propose a contour-body interaction (CBI) module for reasoning segmentation results from correlations between CDM and BDM. They conducted extensive experiments to verify their methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Equation 1 is the basis for their modeling and derivation of the relationship between the original image and the measurement in the feature space. However, Equation 1 itself is not convincing. That is, does the linearity between the original image and the measurement mean that the semantic features of the original image and the measurement are also linear? The authors should have a more rigorous derivation or proof for this." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. The performance of the network is not analyzed, such as the number of parameters, number of floating-point operations, inference time, etc.\n2. Lack of explanation and verification of the weight setting of the hybrid loss function.\n3. The paper does not explain the advantages of this one-step segmentation over the prior visual reconstruction method, and the experiment does not compare it with another method.\n4. There is a lack of a more detailed description of the datasets. According to my understanding, are these datasets all synthetic? Are the measurements of the images synthesized using prior knowledge?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Introduces an Optical-Aware Feature Demultiplexing mechanism that enhances feature extraction from lensless measurements.\n2. Effectively decouples segmentation into contour and body tasks, leveraging a mutual learning strategy.\n3. Demonstrates superior performance on two datasets, outperforming state-of-the-art methods in multiple metrics." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a one-step method without intermediate image reconstruction, addressing privacy concerns and computational efficiency." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The performance of the network is not analyzed, such as the number of parameters, number of floating-point operations, inference time, etc.\n2. Lack of explanation and verification of the weight setting of the hybrid loss function.\n3. The paper does not explain the advantages of this one-step segmentation over the prior visual reconstruction method, and the experiment does not compare it with another method.\n4. There is a lack of a more detailed description of the datasets. According to my understanding, are these datasets all synthetic? Are the measurements of the images synthesized using prior knowledge?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024fdtdnet,\ntitle={{FDTDN}et: Privacy-Preserving Lensless Object Segmentation via Feature Demultiplexing and Task Decoupling},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3WqfSoxLIh},\nnote={under review}\n}" }, "abstract": { "value": "Camera-based vision systems pose privacy risks, whereas lensless cameras present a viable alternative by omitting visual semantics from their measurements due to the absence of lenses. However, these captured lensless measurements pose challenges for existing computer vision tasks such as object segmentation that usually require visual input. To address this problem, we propose a lensless object segmentation network via feature demultiplexing and task decoupling (FDTDNet) to perform object segmentation for lensless measurements. 
Specifically, we propose an optical-aware feature demultiplexing mechanism to get meaningful features from lensless measurements without visual reconstruction and design a multi-task learning framework decoupling the lensless object segmentation task into two subtasks, i.e., the reason for contour distribution maps (CDM) and body distribution maps (BDM), respectively. Extensive experiments demonstrate that our FDTDNet achieves highly accurate segmentation effect, which sheds light on privacy-preserving high-level vision with compact lensless cameras." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Lensless Object Segmentation; Lensless Imaging; Privacy-Preserving; Feature Demultiplexing; Task Decoupling" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/f6e3951ee450ae396e58513a1f3a06ca1926fb48.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "FDTDNet: Privacy-Preserving Lensless Object Segmentation via Feature Demultiplexing and Task Decoupling" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3Wuvqc4xoy
Learning Efficient Representations of Neutrino Telescope Events
main
Active
neutrino;neutrino telescope;representation;learning
applications to physical sciences (physics, chemistry, biology, etc.)
1;3;3;8
4;4;4;4
2;2;2;4
1;2;2;4
1;2;1;4
3.75
4
2.5
2.25
2
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "* In Fig. 2, the “autoencoder” outputs some probabilities through the softmax activation. This is a confusing design. How is the reconstruction loss applied in this case? \n* In section 4.2, the training methodology for the three models and the utilization of om2vec are unclear. Can you provide a more detailed explanation of the training process and how om2vec is incorporated?\n* Are there any additional physics features that could be included in the time series data, beyond the current single feature of photon hits?\n* In lines 179-180, the authors wrote “We opted for a learnable memory embedding for the transformer decoder layers, ensuring that the decoder portion of the architecture remains entirely independent of the encoder”. Please elaborate on the memory embedding block about its design.\n* The model and training details in Table 1 are incomplete and unclear. Can you provide a more comprehensive description of the model architecture, including the number of encoder and decoder layers used?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The application of machine learning techniques in scientific research is a vital and rapidly evolving field. We are delighted to see submissions in this area and encourage researchers to share their relevant work." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This article presents an approach to learning representations of neutrino events by leveraging a transformer-based variational autoencoder. The model is trained to capture the photon arrival time distribution, and the learned representations are evaluated using the Jensen-Shannon divergence to assess reconstruction quality. Furthermore, the authors explore the applicability of these representations in a downstream task – angular reconstruction." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "This article requires significant improvements in its writing and technical accuracy. Numerous technical details are either unclear, incorrect, or require further clarification (see Questions for specific concerns). As it stands, the article's technical clarity is compromised, which may lead to confusion and misinterpretation. A thorough revision is necessary to ensure the article's technical details are accurate, clear, and concise." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. How does the model’s performance vary with different encoder/decoder block architectures or deeper networks?\n2. Can the approach be adapted or extended to handle data from other types of particle physics experiments with different signal characteristics?\n3. Have real-world data tests been considered, and if so, what were the challenges and results?\n4. Is there potential for this method to contribute to real-time data processing in neutrino observatories under field conditions?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- Originality: Applying transformer-based VAEs to neutrino event data is novel and demonstrates a creative extension of ML techniques to physical sciences.\n- Quality: Comprehensive evaluation of the model against AGMMs, showing significant improvements in reconstruction accuracy, computational efficiency, and robustness.\n- Clarity: The architectural details, data processing steps, and experimental methods are described with clarity, making the paper accessible to readers familiar with ML and neutrino physics.\n- Significance: The ability to improve data processing and enable better downstream analyses has substantial implications for neutrino research and potentially for other high-dimensional physics datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents om2vec, a novel approach leveraging transformer-based variational autoencoders (VAEs) to create compact, descriptive latent representations of photon arrival time distributions (PATDs) from neutrino telescope events. The proposed model is designed to handle the high-dimensional, variable-length data typical of neutrino observatories like IceCube. om2vec aims to outperform conventional approaches, such as asymmetric Gaussian mixture models (AGMMs), by improving reconstruction accuracy, runtime efficiency, and reliability while being less dependent on hyperparameters. The paper details the architecture, training, and testing with simulated datasets, comparing the method’s performance with traditional AGMMs and exploring its utility for downstream tasks like angular reconstruction." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Generalizability: While the results are promising, it would be helpful to see a more extensive discussion on how the method might generalize across different types of neutrino observatories or non-simulated real-world data.\n- Comparison Baseline: Although om2vec is compared with AGMMs, additional comparisons with other potential ML approaches (e.g., deep CNNs or LSTMs) for PATD representation might strengthen the case for its use.\n- Hyperparameter Sensitivity: While the model claims reduced dependence on hyperparameters, an exploration of performance variability with different encoder/decoder block configurations or latent dimension sizes would provide deeper insights into its stability." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Not at the moment, will see other reviewers' comments." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The application is certainly interesting and compelling. I also like the rationale of the work. There's a clear scientific motivation for these problems." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This develops a variational autoencoder to create a generative model for data produced by neutrino telescopes. The architecture is based on transformers, and results in a flexible representation and improved computation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Several aspects. First, this is an ML focused conference so I would have appreciated greater details on the encoder and decoder without having to dig through the source code. Why transformers as opposed to a simpler architecture? Is there some kind transformation of the features that would allow for an MLP. Even if not, I would appreciate these as baselines as opposed to a traditional statistical model when comparing performance.\n\nAlso having worked with these a lot, I'm willing to bet that there was a substantial amount of tweaking required for learning rate and architecture parameters. If not, I'm certain performance can be improved dramatically by taking these steps. Another example, the runtime isn't really compelling to me. This is a feed-forward network, clearly it's going to be quicker than the alternatives. Should be supplementary, which would make more space for the fitting details I discussed.\n\nOverall, this seems written for a scientific audience rather than an ML audience. I very much appreciate the application and clear motivation so I hope it's resubmitted. It just seems like some of the details we find interesting were glossed over and need to be improved for this to be accepted." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "The paper is somewhat limited as it presents results solely based on training and testing with simulated events, which may not accurately reflect real-world measurement data. Given that the approach uses a VAE-based transformer, it may perform better with simulated data that follows known distributions. Do you have access to any existing real-world datasets? If so, I would appreciate your feedback on this aspect." }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The use of a transformer-based variational autoencoder (VAE), called om2vec, represents an innovative approach for neutrino event data analysis, which has traditionally relied on more conventional statistical methods or simple summary statistics. \n- The paper pushes the boundaries of machine learning applications within high-energy physics, specifically neutrino detection. \n- By applying a VAE with transformer components to a unique scientific data source, the paper contributes to bridging techniques between disciplines, such as physics, machine learning, and data science. This could encourage further cross-disciplinary research and adaptation of machine learning models to complex scientific problems." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper titled \"Learning Efficient Representations of Neutrino Telescope Events\" introduces a novel approach called om2vec, which utilizes transformer-based variational autoencoders (VAEs) to effectively represent neutrino telescope events. The study addresses the challenges posed by high-dimensional, variable-length Photon arrival time distributions (PATDs) recorded by optical modules in neutrino telescopes, particularly focusing on the IceCube Neutrino Observatory." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper lacks a clear structure and does not adequately address related work. If this is indeed the first study applying deep learning techniques to the domain of neutrino telescopes, it is essential to include a dedicated **Related Works** section to provide context for this research.\n\n- The figures in the paper are oversized. I recommend the authors resize them to a more standard dimension to enhance the overall presentation quality. The current size does not meet the standards expected for conference presentations.\n\n- There are several typographical errors throughout the paper (e.g., lines 127, 484, etc.), which detract from its readability and should be addressed to improve clarity.\n\n- The objective function is unclear, and the problem is not well-defined. 
The paper jumps directly to the results, with only a brief discussion of the classical $KL$ divergence. A significant improvement is needed in presenting a comprehensive **Proposed Methods** section that clearly defines the final objective function, rather than merely referring to it in the **Results section** (lines 228 to 230).\n\n- Some statements in the paper are ambiguous or inaccurate. For example, the assertion in lines 223 to 232 that \"the re-parameterization trick is utilized to construct the latent representation $z$, a vector of user-defined length referred to as the latent dimension. This technique guarantees that the latent space remains continuous and that similar representations within this space reconstruct to similar PATDs\" is misleading and not entirely accurate. In fact, the reparameterization trick separates the randomness of sampling (handled by $\\epsilon$) from the parameters $\\mu$ and $\\sigma$, which allows one to compute gradients with respect to these parameters. I recommend that the authors deepen their understanding of this concept from this paper [1].\n\nI would be willing to consider increasing my rating, but only if these issues are adequately addressed. As it stands, the current version of the paper is not ready for publication.\n\n**References:**\n\n[1] Kingma, Diederik P., and Max Welling. \"An introduction to variational autoencoders.\" Foundations and Trends® in Machine Learning 12.4 (2019): 307-392" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Using variational autoencoders to learn representations of neutrino telescope events" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024learning,\ntitle={Learning Efficient Representations of Neutrino Telescope Events},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3Wuvqc4xoy},\nnote={under review}\n}" }, "abstract": { "value": "Neutrino telescopes detect rare interactions of particles produced in some of the most extreme environments in the Universe. This is accomplished by instrumenting a cubic-kilometer volume of naturally occurring transparent medium with light sensors. Given their substantial size and the high frequency of background interactions, these telescopes amass an enormous quantity of large variance, high-dimensional data. These attributes create substantial challenges for analyzing and reconstructing interactions, particularly when utilizing machine learning (ML) techniques. In this paper, we present a novel approach, called om2vec, that employs transformer-based variational autoencoders to efficiently represent neutrino telescope events by learning compact and descriptive latent representations. We demonstrate that these latent representations offer enhanced flexibility and improved computational efficiency, thereby facilitating downstream tasks in data analysis." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "neutrino", "neutrino telescope", "representation", "learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/280034c97dfba8649d53c794ed500bafbaa99fbd.pdf" }, "presentation": null, "primary_area": { "value": "applications to physical sciences (physics, chemistry, biology, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/e96cd606d9331eef1f983b3f7b1344b1ac7c681b.zip" }, "title": { "value": "Learning Efficient Representations of Neutrino Telescope Events" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3X3LuwzZrl
Multi-Label Node Classification with Label Influence Propagation
main
Active
graph neural networks;multi-label;node classification
learning on graphs and other geometries & topologies
5;5;6;8
3;4;3;4
3;2;3;3
3;2;3;3
1;3;3;3
6
3.5
2.75
2.75
2.5
0.408248
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. What are the limitations of decomposing message passing into propagation and transformation operations? Are there cases where this decomposition might not hold?\n\n2. How sensitive is the method to the initial construction of the label influence graph?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper introduces a new way to analyze label relationships by examining their mutual influences rather than just correlations, supported by empirical observations shown in Figure 1.\n\n2. The work provides a theoretical analysis of how label influences emerge during both propagation and transformation operations in graph neural networks.\n\n3. The proposed LIP framework is plug-and-play compatible with various GNN architectures and shows consistent performance improvements across different datasets and settings." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents Label Influence Propagation (LIP), a novel approach for multi-label node classification (MLNC) on graphs. The key innovation is analyzing and leveraging the mutual influences between different labels, rather than just label correlations. The authors decompose the message passing process into propagation and transformation operations to quantify label influences, construct a label influence graph, and dynamically adjust the learning process based on positive and negative label interactions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper doesn't thoroughly discuss how the method scales with increasing numbers of labels or larger graphs." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see the above weakness." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Pros:\n1. The paper considers the positive and negative influence of different labels and encourages or suppresses labels that bring positive or negative influences, respectively.\n2. 
The proposed model is a plug-and-play approach, which can be applied to various GNN backbones. \n3. The paper offers a label correlation analysis by dissecting the pipeline into a forward and backward propagation segment." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a label influence propagation framework for the multi-label node classification task. Specifically, the paper constructs a label influence graph based on the integrated label correlations. Then, the paper propagates high-order influences through this graph and dynamically adjusts the learning process by amplifying labels with positive contributions and mitigating those with negative influence." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Cons:\n1. What is the difference between the label propagation methods? (GMNN: Graph Markov Neural Networks, Combining Graph Convolutional Neural Networks and Label Propagation, Resurrecting Label Propagation for Graphs with Heterophily and Label Noise). The paper should cite and compare with them, and highlight the improvement of the model.\n2. Some hyperparameters are important to the model. It is better to give some hyperparameter analysis about the model, such as \\alpha, \\beta. It is suggested to plot the results showing how performance varies with these parameters and report the chosen values.\n3. How does the number of label categories k affect the model? It is recommended to study the effect of the performance.\n4. It is highly recommended to give some examples, i.e., the visualization of the positive and negative influence of different labels in the case studies, showing how certain labels positively or negatively influence others and how this affects the model's predictions.\n5. It is recommended to give other backbones, such as the most commonly used GIN, to show the effectiveness of the model.\n6. It is better to check the label of the axis in the figure. i.e., fig 4c. The label of the x-axis is missing." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "My concerns are mainly from two parts: discussions of relation work and designs of experiments.\n\n1. Actually, decoupled GNNs\\[1,2,3,4,5] have been studied in past a few years. Although the authors are inspired by the theoretical analysis to decouple the propagation module and feature transformation, the previous efforts in decoupled GNNs should be discussed. I have noticed the authors cite APPNP \\[1], one of the representative decoupled GNNs, in Line 263. Here, I suggest the authors open a new subsection in Related Work to comprehensively review recent decoupled GNNs.\n2. As a plug-and-play, I suggest the authors try more advanced GNNs as backbones for ablation study, such as advanced decoupled GNNs\\[1,3].\n3. 
Based on Figure 4(b), I suggest the authors conduct ablations on all other datasets to comprehensively validate the contribution of each module.\n4. The authors claim the efficiency of the proposed method via the time complexity analysis. Maybe the authors can report the computational cost of each method, including training time cost and GPU memory cost, to strengthen this contribution.\n\n\n\n[1] Predict then propagate: Graph neural networks meet personalized pagerank, ICLR 2019\n\n[2] On the equivalence of decoupled graph convolution network and label propagation, WWW 2021\n\n[3] Adaptive universal generalized pagerank graph neural network, ICLR 2021\n\n[4] Towards deeper graph neural networks, KDD 2020\n\n[5] Neighborhood Convolutional Graph Neural Network, KBS 2024" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This paper is well-written and easy to follow.\n2. The authors provide a theoretical guarantee for the proposed method.\n3. LIP shows promising performance on various datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper develops a new method, LIP, that leverages the propagation strategy to obtain high-order label information in the graph for multi-label node classification. The authors provide a theoretical analysis for the motivation and the proposed method. The extensive experimental results show the effectiveness of LIP on multi-label node classification." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The review of related work is not comprehensive.\n2. The ablation study is inadequate.\n3. The efficiency study is missing." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "**Q1.** Could you provide additional explanation of the performance enhancement on heterophilous graphs? It's interesting that LIP consistently enhances performance on these datasets, despite the method appearing to be built upon a homophily assumption.\n\n**Q2.** How does the performance change when varying the $\\beta$ in eq. 12?\n\n**Q3.** Although the authors state that \"multi-label classification on graph data is inherently transductive\" in the Appendix, the inductive setting with partially accessible graph structure is more realistic in many real-world applications. While benchmark datasets are commonly used in a transductive manner, it would be straightforward to modify these datasets for the inductive setting by masking nodes and their corresponding edges in different splits. The authors should consider evaluating LIP under such conditions to verify the practical relevance." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- This paper is well-motivated, emphasizing the importance of multi-label node classification in various domains. The challenge of label correlations and their potential positive and negative influences in non-Euclidean data is clearly explained.\n- The idea of constructing a label graph is interesting.\n- Experimental results demonstrate that LIP achieves notable performance gains across datasets of diverse domains, regardless of the backbone GNNs, highlighting its versatility." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes the Label Influence Propagation (LIP) method for multi-label node classification on graph-structured data. The main idea is to model both positive and negative influences between labels by separating the message-passing process into distinct propagation and transformation operations. Additionally, LIP constructs a label influence graph to quantify label-wise importance scores." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**W1.** The writing needs more thorough proofreading. I noticed several grammatical issues, which detract from the overall quality of the paper.\n\nExamples include:\n- \"Illustrate\" should be changed to \"Illustration of\" in Fig. 3 caption.\n- \"contain\" should be \"containing\" in line 300.\n- \"analysis\" should be \"analyze\" in line 313.\n- \"which can be change\" should be \"which can be changed\" in line 320.\n\nAdditionally, table captions should be placed *above* the tables.\n\nSeveral critical errors related to definitions also needs to be revised:\n- \"negative influence\" should be \"positive influence\" in line 301.\n- The eq. 6 seems inconsistent with the textual explanation of positive influence.\n- $\\Omega_j$ may need to be revised to $\\psi_j$ for consistency.\n\n**W2.** The clarity of the paper needs to be improved. For instance:\n- The theoretical justification in Section 4.2 and Appendix A needs more clarity. While the authors assert that the graph structure is a key driver of label influence during propagation, they do not fully clarify how the feature influence $I(i,j)$ and PPR are connected to the proposed label influence metric in the propagation operation. I can infer that positive and negative influences in PPR and feature influence metrics correspond to Equations 6 and 5, respectively, but this connection should be made explicit.\n- Additionally, the augmented form of $\\text{\\textbf A}$ is not clearly defined. Is it a multi-hop adjacency matrix?\n\n**W3.** What are the limitations of the proposed method? The authors didn't include a discussion on the potential limitations of the proposed method.\n\nIf the above concerns and subsequent questions are addressed, I'm willing to raise my score." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024multilabel,\ntitle={Multi-Label Node Classification with Label Influence Propagation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3X3LuwzZrl},\nnote={under review}\n}" }, "abstract": { "value": "Graphs are a complex and versatile data structure used across various domains, with possibly multi-label nodes playing a particularly crucial role. \nExamples include proteins in PPI networks with multiple functions and users in social or e-commerce networks exhibiting diverse interests. \nTackling multi-label node classification (MLNC) on graphs has led to the development of various approaches. Some methods leverage graph neural networks (GNNs) to exploit label co-occurrence correlations, while others incorporate label embeddings to capture label proximity. However, these approaches fail to account for the intricate influences between labels in non-Euclidean graph data.\nTo address this issue, we decompose the message passing process in GNNs into two operations: propagation and transformation. \nWe then conduct a comprehensive analysis and quantification of the influence correlations between labels in each operation. \nBuilding on these insights, we propose a novel model, Label Influence Propagation (LIP). \nSpecifically, we construct a label influence graph based on the integrated label correlations. \nThen, we propagate high-order influences through this graph, dynamically adjusting the learning process by amplifying labels with positive contributions and mitigating those with negative influence.\nFinally, our framework is evaluated on comprehensive benchmark datasets, consistently outperforming SOTA methods across various settings, demonstrating its effectiveness on MLNC tasks." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "graph neural networks", "multi-label", "node classification" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/fda28e6b12e15611f21287ba1494365ccb1a99f2.pdf" }, "presentation": null, "primary_area": { "value": "learning on graphs and other geometries & topologies" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Multi-Label Node Classification with Label Influence Propagation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3X6QlkWfHH
Covariate-informed continuous-time gray-box modeling to identify responsiveness of post-surgical pain to opioid therapy
main
Active
state space model;gray box;hybrid model;time series;treatment effects
learning on time series and dynamical systems
3;3;5;5
4;2;3;4
2;2;3;3
1;2;2;3
2;4;2;3
4
3.25
2.5
2
2.75
0.301511
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. Why not provide baselines and ablations of your method to defend the design choices and that the added complexity is necessary for improved predictions of $a$? Why wouldn't a simple autoregressive model work pretty well, while maintaining a similar level of interpretability?\n2. To improve the impact of this work, please address how would $a$ be used by a clinician, what kind of decision-making can this enable, and demonstrate the efficacy of potential interventions. Can you demonstrate either a case study or an improved causal treatment effect based on some interventions your method allows?\n3. Can you remove the restriction of a full 24-hour trajectory and find the limits of your method in terms of how much time it really needs to get an accurate measurement of $a$, and discuss how that could affect clinician decision-making abilities. Specifically, can you get an estimate of $a$ in a short enough time to allow clinicians to make meaningful interventions on choice of opioid.\n4. It seems that the result that Opioid responsiveness is associated with better overall outcomes is confounded by the fact that those patients are just sicker. Can latent sickness be included in the model to remove this confounding of severity of the disease, from the patient's responsiveness to opioids." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The model follows Clinical Intuition\nThe authors the main clinical takeaway that opioid responsiveness is associated with better overall outcomes, and demonstrate this is the case with several pain and risk outcomes. Additionally, the model successfully learns known relative potencies between different opioids (fentanyl vs hydromorphone vs oxycodone) from the clinical literature, which helps validate their approach.\n\nAdditionally, mechanistic models are interpretable and practical for a clinician understanding and aiding their decision-making.\n\n\n2. The model leverages known latent dynamics:\nThe paper leverages a pharmacology model to estimate opioid concentrations in the patient over time $u(t)$. This domain-specific bias could give this method a huge edge over black box models (especially on this limited dataset size) however the authors do not compare to any black box baselines, or try ablating this from the model and just using raw dosage information.\n\nThe model additionally provides an interpretable latent pain score, that ideally is more objective than reported pain which is very noisy.\n\n3. Validation\nThe authors perform a simulated study demonstrates they can recover the opioid responsiveness for synthetic patients given a 24-hour trajectory. Additionally, they demonstrate the model learns relative potencies known in the medical literature." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper defines a mechanistic Bayesian model that can infer a posterior over patient specific (although patient specific in an incredibly limited sense) opioid responsiveness to opioid treatments from observations of reported pain." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "## Key Weaknesses\n\n1. Impact of this application is currently weak\nThis is mainly because the paper requires a full 24-hour trajectory and provide retrospective insights rather than actionable predictions:\n\na. The method can only determine the opioid responsiveness **after** seeing the full 24-hour trajectory. I think there should be analysis of the potential impacts and interventions this could enable physicians to make real time decisions based on the inferred patient opioid responsiveness, at different time scales (say 1 hour, 4 hours, 8 hours, etc.)\n\nb. Additionally, patient trajectories are inferred independently. The only cross patient learning is limited to the \"covariate-informed prior\", a neural net which only take static patient information as input (demographics and a small set of patient clinical characteristics). I imagine there is useful initial trajectory information that is not captured in that limited prior data. More specifically, can we take the first hour or so of the patient's observed dynamics and leverage this for a more informative prior in the model? There must be characteristics of initial responses to treatment that are shared across patients too, and this approach would enable this.\n\nIsn't the result that opioid responsiveness is associated with better overall outcomes confounded by the fact that acutely sicker patients (and chronically sick patients) have more pain and may be flagged as having low responsiveness by your method. A patient may have high pain regardless of the amount of opioids and responsiveness if their physical state is constantly deteriorating. The discussion of these findings and confounders should be significantly deeper.\n\nIf a patient is correctly identified as a low responder, what do we do? Alternative remedies are not proposed or analyzed for aiding clinical decision-making.\n\n2. Weak Evaluation -- No baselines or ablations\n\nThe paper's proposed model needs a comparison against simpler approaches to justify (a) its added complexity and (b) the additional computational cost. I don't know what an appropriate baseline is, but I would expect there are simple autoregressive baselines such as:\n```\nFor each patient:\n1. Take hourly bins of:\n - Average pain score (carry forward and backward impute these pain scores)\n - Total opioid concentration from u(t) and/or the raw opiod dosage in that hour\n2. Fit simple linear model:\n pain(t) = β0 + β1*pain(t-1) + β2*drug_concentration(t) + ε\n3. Use β2 as estimate of opioid responsiveness $a$\n```\n\n\nYou should compare to standard medical assessments of opioid tolerance or physician's labeled estimate to validate your model as well, or demonstrate the specific failures of these standards that your model overcomes.\n\nThe authors do not ablate any parts of the model to demonstrate that they are indeed necessary for predicting $a$.\n\n\n## Presentation issues\n\nDefine $\\mathcal{L}$ before it is first used. I think you use it as probability, right, why not use $P$?" 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "My most important questions are already listed above under 'weaknesses'.\n\nIn addition, a few minor things:\n- What are the actual models $f$ and $g$ used in the case study?\n- How is all of this implemented? What are the key packages used for modeling / inference?\n- What is the exact motivation for this work? The first paragraph concludes by stating that \"Tools to identify patients for whom opioids may be less effective for pain relief and that may have greater risk for dependence are greatly needed.\" What clinical benefit exactly would such tools enable? In other words, how could the insights derived from such a model be turned into improved medical care? \n- The authors write that \"Typically, expectation-maximization procedures are used to fit state space models.\" While possibly true (that EM is 'typically' used), this seems a bit reductive to me? A modern approach might be e.g. to use current autodiff packages and implement maximum likelihood estimation via SGD, see e.g. https://github.com/probml/dynamax ? Särkkä and Svensson, Bayesian Filtering and Smoothing, might be interesting for the authors if they don't know it already (which I assume they do).\n- (Black-box) Variational Inference could be listed (and discussed) as a potential alternative to the MCMC approach pursued here\n- I found the example presented in Fig.1 to be a bit perplexing, since it actually looks as if opioid effect site concentration does not have a meaningful effect on pain scores at all? Or am I misinterpreting something here? Pain rises and then drops again around 400 min without opioid administration, and then oxycodone is administered twice with no apparent effect at all? What is the reader to take from this figure?\n- The acronym 'PACU' is never spelled out / defined. I presume the same holds for several other acronyms." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- An interesting, non-standard clinical application problem, a custom solution well-tailored to this particular problem, and a validation on real-world clinical data for this particular problem\n- A novel (to me, at least), interesting, uncertainty-aware and quite general way of combining black-box ML with traditional models based on prior domain knowledge (in this case, on pharmacodynamics and -kinetics)\n- The paper is generally very well-written and nicely readable; the presented (quite complex) modeling approach is presented well; math is presented thoroughly and precisely\n- A thorough approach for MCMC and EM-based inference in the proposed model\n- Great related works section, providing a concise yet very helpful overview of various (very different) strands of related literature" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a general approach to combine black-box ML with gray-box modeling components informed by prior domain knowledge. They apply this to a challenging (non-standard) clinical problem, namely the identification of post-operative patient responsiveness to pain medication. To this end, they combine a continuous-time dynamical state-space model of the patient's latent pain state with a black-box ML model component that adjusts each patient's prior on medication responsiveness based on available covariates. MCMC using a custom proposal function and expectation maximization are used for inference. The approach is validated using a simple simulation study before being applied to a large-scale real-world dataset. The identification results appear to loosely correlate with prior results known from the medical literature." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. I am not yet convinced that the identification was actually successful, yields meaningful results, and is useful in any real way. Two specific points in case:\n - Did the MCMC procedure actually converge? No standard MCMC details or diagnostics are provided. How many chains were used? How many steps? Did they all converge to the same distribution? What are the effective sample sizes (ESS) and $\\hat{r}$? What do the trace plots and ACFs look like? Like most inference procedures, MCMC always yields *some* result but it is rarely trustworthy. As a point in case, the responsiveness posteriors in Fig. 3 look like they are strongly dominated by some relatively uninformative prior; they are very much *not* concentrated around a specific parameter value (and characterizing them by the median does not seem to make a lot of sense to me).\n - The identified opioid responsiveness correlates with clinical outcomes (table 3). However, is this correlation actually any better than a much more naive approach such as simply categorizing patients based on the mean reported pain in the first 24h after surgery, ASA status, procedural severity, age, etc.? Do we *gain* anything from using this quite complex approach? In any case, this is only *very* circumstantial evidence that the identified parameters indeed bear any meaningful relationship to real patient properties. (Also, the distributions in Fig. 4 are really not bimodal at all, hence a categorization into high/low groups makes little sense. 
It would seem much more meaningful to assess *correlations* between the identified parameters and the relevant outcomes instead.)\n\n2. I am not (yet) entirely convinced by the choice of the dynamical model. Eq. (2) suggests that drug administration pushes down the latent pain state, even far into negative territory and even if pain is already suppressed to zero. This seems to neglect e.g. saturation effects and might lead to the latent pain state taking an unrealistically long time to 'recover' back to normal. (I would rather have expected a multi-state model, e.g. with separate latent pain and opioid effect states and pain observations representing the difference of the two.) It also seems unlikely that the state transition noise is actually white (Wiener) - e.g. after stopping opioid administration, the pain state will likely continually increase, no? So I would expect the process noise to be (auto-)correlated." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "see weakness" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.This paper addresses a interesting and meaningful topic in the AI for healthcare field, focusing on quantifying patient responsiveness to opioid therapy, which is crucial for reducing risks associated with opioid use. Additionally, the authors introduce an novel approach by employing continuous-time state-space modeling, which captures the dynamic nature of pain and drug effects more effectively.\n\n2. The authors provided R code for the model and simulation study, which is helpful for reproducibility." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposed a continuous-time state-space model for quantifying patient responsiveness to opioid therapy by using pain scores, PK/PD models, and patient covariates. The model uses Bayesian inference and MCMC methods to estimate latent pain states and individual opioid response parameters. A simulation study and real-world data from over 21,000 surgical cases were used to validate the effectiveness of model." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "[Presentation] The introduction does not clearly outline the basic modeling of the responsiveness of postsurgical pain and the limitations of existing methods, which makes it harder for readers to fully grasp the motivation behind the study. For example, the introduction only discusses the importance and challenges of personalized opioid responsiveness but does not mention any existing work or their limitations. If there are existing or similar studies, please add them to the introduction to provide context and show how your approach differs. 
The \"outcomes\" column in the Table 3 are not well-explained, making it difficult for readers to follow\n\n\n[Method] The proposed method assumes covariate-informed priors, which may strongly impact predictive performance and potentially introduce errors. Additionally, opioid ECS u_j values are derived from patient demographics, overlapping with covariates c_j and potentially introducing correlation issues that could affect model reliability. I recommend that the authors perform specific analyses, such as correlation or multicollinearity tests, to assess the relationship between these variables and evaluate their impact on model outcomes to ensure unbiased predictions.\n\n[Method] This paper employs a complex continuous-time state-space model and stochastic differential equations, which may be hard for clinicians to understand and apply in practice. Additionally, I recommend that the authors include a more detailed discussion on how the predicted opioid responsiveness can be effectively translated into intervention strategies or treatment plans, providing clearer guidance on the clinical implications and real-world application.\n\n[Experiment] Although the experimental section demonstrates the effectiveness of the model through simulations and real-world data, it lacks evidence or experimental results to support how the proposed method outperforms existing models [1]\n\n[1] Estimating individual treatment effect: generalization bounds and algorithms. ICML2017" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please address the weakness points that I raised (which are fundamentally about *baselines* and a *more thorough literature review* that better justifies the specific modeling choices made). If some modeling choices could actually be swapped out for something else or changed, giving the reader a sense of how much the results change would be helpful.\n\nMore generally, especially as I find that this paper is more of an applied paper, I think the paper should more thoroughly interpret the results of the model in the context of the actual application. Being very clear about how the proposed method could be used by clinicians/practitioners and how it compares to what they already are currently doing would be helpful." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The specific application (looking at how different opioid drugs impacts perceived pain) is well-motivated.\n- The high-level ideas of the paper are mostly easy to understand/follow (perhaps in part because from a technical standpoint, I find that the proposed method is largely just piecing together fairly standard techniques).\n- The proposed approach looks promising." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a \"gray box\" model (partly straightforward to interpret and partly black box) that quantifies responsiveness of pain to opioid therapy. To demonstrate the effectiveness of the approach, the authors experimented on simulated data as well as a real-world observational study." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- I think this paper would really benefit from having baselines, even if the baselines are \"straw man\" baselines, just to give the readers a sense of how well much simpler \"naive\" approaches to solving the problem do. I don't know this specific application well enough to know whether there are any well-known existing baselines or well-known clinical guidelines that would help us better understand how the proposed method compares to what best practices currently are. I understand that the paper shows that the proposed method is able to recover findings consistent with existing literature, but it seems from reading lines 369-375 (page 7) that the method also has some discrepancies with existing literature? More thorough discussion of this discrepancy would be helpful.\n- In the related works section, causal inference based approaches are mentioned (such as the Liu et al (2023b) and Bica et al (2020) references). Could these be used as baselines? I think importantly, even if the causal assumptions are not satisfied, it is still worth trying out these models to get a sense of whether they provide anything useful even at the level of quantifying *association* rather than causation.\n- I think better justifying the different components of the proposed method would be helpful, especially since I get the impression that many different models could have been developed to solve this particular problem. For example, how much do the results change as we change the black box predictors used? Also, maybe I missed it but I didn't understand which specific black box predictors are used. For the continuous state space part, I'm under the impression that a number of authors have worked on methods in this space that could potentially be applied to your setting as well (for example, some older papers here would be the deep Kalman filter paper by Krishnan et al (2015) or the paper on structured variational autoencoders (Johnson et al 2016); more recently there has been an explosion of papers recently on state space modeling using S4/Mamba architectures, and I'm not sure to what extent those could be applied in your setup).\n\nMinor:\n- Some of the math notation is not standard and should be fixed, especially how functions are specified. For example, in the first two paragraphs of Section 3.1, \"$y_j(t_{j_i}): \\mathbb{R}\\rightarrow\\\\{0,1,\\dots,10\\\\}$\" should instead be written as \"$y_j: \\mathbb{R}\\rightarrow\\\\{0,1,\\dots,10\\\\}$\" and \"$\\boldsymbol{u}_j(t):\\mathbb{R}\\rightarrow\\mathbb{R}^m$\" should instead be written as \"$\\boldsymbol{u}_j:\\mathbb{R}\\rightarrow\\mathbb{R}^m$\". Etc. (Note that I can't figure out how to get \"\\mathbf\" to work in OpenReview so I didn't get the bolding to match the text.)\n\nReferences:\n- Krishnan et al. Deep Kalman Filters. NeurIPS Advances in Approximate Bayesian Inference & Black Box Inference (AABI) Workshop 2015.\n- Johnson et al. Composing graphical models with neural networks for structured representations and fast inference. NeurIPS 2016." 
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "We developed a continuous-time state space method combining mechanistic PK/PD with black box prediction to identify surgical patients' responsiveness to opioids, given preoperative covariates, pain scores, and opioid administration over time." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024covariateinformed,\ntitle={Covariate-informed continuous-time gray-box modeling to identify responsiveness of post-surgical pain to opioid therapy},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3X6QlkWfHH},\nnote={under review}\n}" }, "abstract": { "value": "Quantifying responsiveness of pain to opioid administration is a clinically important, yet technically challenging problem.\nPain is a subjective phenomenon that is difficult to assess by means other than infrequent and low-resolution patient self-reporting.\nWe tackle this problem using a continuous-time state space modeling approach that incorporates mechanistic models of opioid effect site concentration as well as information from covariates using black-box models iteratively trained to predict the distributions of partially observed variables.\nWe evaluated our method in simulation, and applied it in a real-world observational study of 21,652 surgical cases, where our method is able to recapitulate the known potencies of different opioids, and stratify patients by pain and opioid use related outcomes." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "state space model", "gray box", "hybrid model", "time series", "treatment effects" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/9201c32d811dd1187f52a6790ea984f8061c8898.pdf" }, "presentation": null, "primary_area": { "value": "learning on time series and dynamical systems" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/9c82b3b61d663f218f0737edefedb0c6067fd545.zip" }, "title": { "value": "Covariate-informed continuous-time gray-box modeling to identify responsiveness of post-surgical pain to opioid therapy" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3XTw909oXt
RAG$^C$: Towards Copyright Protection for Knowledge Bases of Retrieval-augmented Language Models
main
Active
Copyright Protection;Ownership Verification;Retrieval-augmented Generation
alignment, fairness, safety, privacy, and societal considerations
3;3;3;5
4;3;4;3
2;2;2;3
2;2;2;3
3;2;2;2
3.5
3.5
2.25
2.25
2.25
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1.Why was CoT chosen as the approach for protecting the knowledge base? Please clarify the rationale behind this choice.\n2. Equation (4) appears to differ from its textual description and would benefit from further analysis and clarification.\n3. The paper appears to lack experimental evaluation of the proposed method's performance in cases where inputs without watermarks phrase still generate target CoT text." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1.This paper is well-structured and well-written, making it easy to follow and understand.\n2.By focusing on the CoT space, the paper offers a unique approach to protect knowledge bases." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a copyright protection method for knowledge bases in retrieval-augmented generation (RAG) for LLMs. It introduces a harmless watermarking approach to verify ownership without harmful effects, embedding traceable CoT-based behaviors that preserve correct outputs. Experiments on benchmark datasets validate the effectiveness and robustness against adaptive attacks of the proposed method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.The paper's motivation is unclear and requires further elaboration on the necessity of addressing the research problem, specifically to avoid generating incorrect answers during verification. Additionally, more practical and detailed descriptions of the security scenario under study should be provided.\n2.The method description lacks clarity. For example, Figure 1 is not adequately explained, and the process of optimizing the \"Watermark Phrase\" text based on Equations (2) and (3) needs more detail.\n3.The statement in line 110 appears to contain incorrect repetition." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See weaknesses and below:\n\n1. Membership inference attacks (MIAs) can also be used to verify data ownership and are harmless, as they do not modify model outputs. Can they be adapted to achieve copyright protection for knowledge bases? 
For example, to determine whether a suspicious third-party LLM is augmented with their RAG knowledge, defenders could conduct MIAs on this LLM and analyze the results, as described in [1]. If so, what are the advantages of the proposed method over MIA-based methods?\n\n2. Does the defender need to know the suspicious LLM's retriever? Are the retrievers you considered in the evaluation (e.g., line 425) the ones you assumed for suspicious LLMs? What would be the effect if suspicious LLMs use other retrievers?\n---\n[1] Is My Data in Your Retrieval Database? MIAs Against RAG. Anderson et al., 2024." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This paper highlights the necessity of copyright protection for the knowledge base of RAG and, for the first time, proposes a harmless protection method.\n\n2. It identifies an under-explored approach to watermarking knowledge bases, specifically within the CoT (Chain-of-Thought) space." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work proposes a method to protect the copyright of knowledge bases. Since watermarking knowledge bases by directly modifying the final results could lead to harmful behaviors, the proposed method instead implants the verification within the chain-of-thought reasoning space, ensuring that the answers remain correct." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The proposed method may be unnecessarily complex, as it generates distinctive CoTs for verification questions with/without watermarks. If the issue with previous methods is that they could produce incorrect answers, why not follow prior poisoning-based methods and design objective or unusual questions that are rarely asked, implanting unique answers in the knowledge base?\n\n2. The proposed protection lacks robustness. With existing adaptive attacks, its accuracy drops to >0.52 and >0.38 (in Table 7). Why do you think the method still performs effectively in this case? What are the criteria? Isn't ownership verification a binary problem, i.e., the suspicious LLM either uses or does not use the protected knowledge base? In this case, random guessing would have an accuracy of 50%.\n\n3. The definition is not well-defined. Definition 1 aims to specify the degree of harmfulness but does not explicitly indicate which variable represents the degree.\n\n4. The threat model is problematic. It assumes that `adversaries intend to ‘steal’ and misuse the protected knowledge base released by the defender to enhance their LLMs without authorization.` Why would the defender release the protected knowledge in the first place? You may assume that a strong attacker can steal the entire knowledge bases instead of the defender release them.\n\n\n5. It contains many typos, e.g., `(watermark) phase(s)` and `retriver`." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- If the CoTs are incorrect, how does the paper address the potential risks associated with flawed reasoning, especially when CoTs may influence model interpretability?\n- What mechanisms are in place to ensure that inaccuracies in CoTs do not propagate to the model’s final answers?\n- Does the watermarking approach detect unauthorized use of the knoweldge base across different scenarios, such as pretraining, fine-tuning, and RAG?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Harmless Watermarking Approach: By embedding watermarking within the chain-of-thought (CoT) reasoning, the novel approach protects knowledge bases without impacting the accuracy or reliability of the language model’s output.\n\n- Effective Ownership Verification: The paper introduces a novel, hypothesis-test-guided method that can reliably identify unauthorized use of proprietary knowledge bases. This approach minimizes false positives and provides a robust mechanism for ownership verification.\n\n- Robustness Against Adaptive Attacks: Extensive testing shows that the method is resilient against adaptive attacks, demonstrating the method's strength in maintaining security even in adversarial settings. This makes the approach more practical for real-world applications.\n\n- Theoretical foundation and Experimental evidence: The paper combines a solid theoretical foundation with rigorous experimental validation on benchmark datasets" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a method designed to protect the copyright of knowledge bases used with LLMs in a way that doesn’t affect the accuracy of the LLM's answers by\n\n1. Safe Watermarking: By using the model's reasoning process (rather than changing final answers), the method adds a harmless watermark that helps detect if someone is misusing the knowledge base.\n2. Verification for Misuse: The method includes special phrases and questions to verify ownership and check for unauthorized use of the knowledge base.\n3. It has been tested on multiple benchmarks, proving it to be effective and resistant to various attacks.\n\nOverall, this work provides a safe way to protect copyrighted knowledge bases, supporting their secure use and sharing." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- High-Level Contribution Obscured by Low-Level Details: The paper’s focus on intricate, lower-level details may overwhelm readers, making it difficult to clearly grasp the high-level contributions and overall impact of the work.- The method may lead to incorrect CoTs which is as undesirable as incorrect outputs\n- Risk of Generating Incorrect Chain-of-Thoughts (CoTs): The method’s reliance on modifying CoT reasoning rather than final outputs could lead to the generation of flawed or inconsistent CoTs. 
Since CoTs play a critical role in model interpretability, incorrect reasoning chains could be as problematic as inaccurate answers.\n- Lack of Clarity on Error Containment: The paper does not adequately explain how it ensures that any inaccuracies in CoTs do not propagate to final outputs.\n- Unclear Scope of Detection: It’s not clear whether the watermarking approach is effective across different types of uses of the knowledge base, such as pretraining, fine-tuning, as well as RAG." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- To clarify, does the proposed method produce watermarked CoT responses for all entries in the RAG database, just a subset of the entries, or create new entries irrelevant to the existing entries in the database? \n\n- For the Harmful Degree metric, is it then evaluated over just the chosen verification questions, or over the entire original database? If over the verification questions only, could the authors elaborate on disadvantages of directly inserting new fictitious entries as backdoor watermarking entries for verification?\n\n- Please provide results on the AUROC or TPR-FPR of the verification metrics of the various methods, especially since the proposed verification method involves using an LLM to evaluate.\n\n- Please elaborate on how such methods compare with direct text watermarking methods where the watermark persists after the text has been used as in-context exemplars, making them applicable to the RAG setting. For example, the method proposed in [1]:\n\n\t[1] Lau et al., \"Waterfall: Framework for Robust and Scalable Text Watermarking and Provenance for LLMs\"\n\n- Have the authors evaluated the performance on benchmarks beyond factual Q&A and involving potentially some elements of reasoning, such that CoT may have an impact on benchmark performance?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper tackles the important problem of copyright protection for the RAG setting, which is becoming increasingly common in applications.\n\n- The proposed method emphasizes minimal harm to the utility/fidelity of the model by focusing on watermarking auxiliary information instead, such as added CoT responses." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Further elaboration on the practicality of the setting would be useful. It is not very clear why adding additional fictitious watermarked entries would not already satisfy the requirements of the setting, and if adversaries could edit the RAG database why the adversaries would not be able to remove all added CoT elaborations to the verification questions (or simply remove all such responses if only the verification questions have the added CoT elaborations)\n\n- The paper would benefit from adding discussion and comparisons with other related text watermarking works that are directly applicable to the considered RAG setting, as it may not be clear why additional customized methods would be needed for the RAG setting when direct text watermarking methods may already work. For e.g., the method proposed in [1]:\n\n\t[1] Lau et al, \"Waterfall: Framework for Robust and Scalable Text Watermarking and Provenance for LLMs\"\n\n- The paper should include additional analysis on the TPR-FPR or AUROC of the verification process, which is an important metric for watermark verification works.\n\n- The paper does not include results on the robustness against adversarial attacks, such as insertion/deletion/substitution or paraphrasing attacks to the retrieved entries from the RAG database prior to usage by the LLM, or after the response has been generated.\n\n- Overall, it would benefit the paper significantly if further details on the setting considered (specific threat model, practicality/realism of the setting), improved metrics (for both verification and harmful degree), and additional empirical results (e.g. from questions below and weaknesses listed here) are provided." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024ragc,\ntitle={{RAG}\\${\\textasciicircum}C\\$: Towards Copyright Protection for Knowledge Bases of Retrieval-augmented Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3XTw909oXt},\nnote={under review}\n}" }, "abstract": { "value": "Large language models (LLMs) are increasingly integrated into real-world applications through retrieval-augmented generation (RAG) mechanisms to supplement their responses with up-to-date and domain-specific knowledge. However, the valuable and often proprietary nature of the knowledge bases used in RAG introduces the risk of unauthorized usage by adversaries. Existing methods that can be generalized as watermarking techniques to protect these knowledge bases typically involve backdoor or poisoning attacks, which introduce harmful behaviors (\\eg, generating incorrect outputs for verification), thereby compromising the LLM's reliability. To address these challenges, we propose \\name{} for harmless copyright protection of knowledge bases. Instead of manipulating the final output, \\name{} implants distinct verification behaviors in the space of chain-of-thought (CoT) reasoning, maintaining the correctness of the final answer. 
Our approach involves three main stages: (1) \\textbf{Generating CoTs}: For each verification question, we generate two CoTs, including a target CoT for building watermark behaviors; (2) \\textbf{Optimizing Watermark Phrases and Target CoTs}: We optimize them to minimize retrieval errors under the black-box setting of suspicious LLM, ensuring that the watermarked verification queries activate the target CoTs without being activated in non-watermarked ones; (3) \\textbf{Ownership Verification}: We exploit a pairwise Wilcoxon test to statistically verify whether a suspicious LLM is augmented with the protected knowledge base by comparing its responses to watermarked and benign verification queries. Our experiments on diverse benchmarks demonstrate that \\name{} effectively protects knowledge bases against unauthorized usage while preserving the integrity and performance of the RAG." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Copyright Protection", "Ownership Verification", "Retrieval-augmented Generation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/bcafbc95ef1498cb93ef79fc02108ff7196eb322.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/f581eb0700ad38299ed6da9fa3cb8228ec568748.zip" }, "title": { "value": "RAG$^C$: Towards Copyright Protection for Knowledge Bases of Retrieval-augmented Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3Xfa63ggsq
AlignIQL: Policy Alignment in Implicit Q-Learning through Constrained Optimization
main
Active
Offline reinforcement learning;optimization;Implict Q learning;diffusion model
reinforcement learning
3;5;5
3;3;3
2;3;2
2;3;2
2;3;2
4.333333
3
2.333333
2.333333
2.333333
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "see above" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The proposed method is derived rigorously.\n2. The experiment shows that the proposed method has good empirical performance compared with other baselines on standard benchmarks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper considers policy extraction problem, where sometimes in offline RL, existing algorithms only learn value function, and policy extraction problem is to find a policy that coorespond to the policy and does not perform OOD actions. The paper considers distilling policies from value functions learned with IQL algorithm, and propose the implicit policy-finding problem. The solution of the IPF problem leads to the proposed AlignIQL algorithm, from a careful derivation of the IPF formulation. In the experiment, the proposed method is compared with several baselines with competitive performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The formulation aims to use a general regularization function $f$, which is a good attempt. However, the remaining results seems to rely on the case that $f(x) = \\log(x)$. Does the result generalize to any other regularization function?\n2. Remark 5.7 seems very hand-wavy. How does the algorithm ensure that the action with the positive advantage is chosen? It does not seem to be reflected in the loss function. \n3. While the result in table 1 looks impressive, I am not sure if this can serve a strong evidence that the proposed method is better than AWR. The proposed method is equipped with diffusion policies, but the IQL (AWR) baseline seem to only use MLP so it might not be a fair comparison. \n4. The result in table 1 is missing standard deviation. \n5. The goal of section 6.2 is unclear. What is the baseline that is compared against in this section? \n6. Some minor issues: a) in eq. 1, is a $\\pi(a \\mid s)$ missing? b) in eq. 2, where is the $Q_{\\theta}$ from? It does not appear eq. 1. c) in eq. 7, the notation $a$ is overloaded in $a' \\sim \\pi(a \\mid s)$." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See Weaknesses" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- This paper introduces a new approach to tackle the implicit policy-finding problem, combining theoretical rigor with practical effectiveness in offline RL.\n- The proposed algorithm, AlignIQL, performs well across varied tasks, demonstrating versatility and effectiveness across different offline RL benchmarks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes AlignIQL (in two versions) to address the implicit policy-finding problem. The authors formulate it as a constrained optimization problem and derive a closed-form solution. The performance of AlignIQL is competitive compared to the baselines." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- While AlignIQL is rigorous, it adds complexity to training by requiring additional multiplier networks and diffusion models, which may increase computational costs and sensitivity to hyperparameters. The scalability of the method is also a concern; can it be extended to image-based tasks?\n- The authors do not explain the use of diffusion modeling in the methods section.\n- The performance of AlignIQL raises some concerns:\n - The authors argue that MuJoCo tasks are already saturated for offline RL, which I agree with. However, AlignIQL's performance is also considerably worse than Diffusion QL and even worse than IQL in 4 out of 9 tasks. Given that AlignIQL consumes more computational resources, this discrepancy is problematic.\n - There is a significant performance difference between the authors' version and the original IDQL paper, which further leaves the reader uncertain about the supposed improvements in AlignIQL's performance.\n - The results were obtained using inconsistent hyperparameters, yet the authors under-analyze the ablation study and hyperparameter sensitivity.\n - The authors state, “Figure 2 shows that as training time increases, the performance of AlignIQL with different N converges to the same value, which shows that AlignIQL is insensitive to N.” This conclusion is not obvious from Figure 2. A clearer approach would be to report the mean and standard deviation of these scores.\n - The optimization techniques may risk overfitting in AntMaze environments with sparse rewards, potentially reducing generalization to new scenarios. More testing on sparse reward tasks would benefit this submission.\n- In this submission, \"policy alignment\" is defined differently from its use in language models. A more formal definition of “policy alignment” should be provided, or the authors could consider renaming it.\n- A minor issue: multiple duplicate references appear in the bibliography (e.g., lines 568-573). Additionally, lines 916-917 may contain editing errors." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- How does AlignIQL perform in real-world environments with noisy or incomplete datasets? The paper evaluates performance on D4RL benchmarks, but it would be interesting to see how the method handles imperfect data.\n- What are the key factors that affect the alignment between the Q-values and the learned policy in AlignIQL? Understanding the sensitivity of the method to different alignment parameters could help clarify its robustness.\n- How does the computational complexity of AlignIQL compare to other state-of-the-art offline reinforcement learning methods in terms of training time and resource usage? This would help evaluate the method’s scalability for larger or more complex tasks.\n- Is the approach compatible with more advanced neural network architectures, such as transformers, for offline reinforcement learning? Could integrating more modern architectures improve its performance?\n- What are the potential limitations of applying AlignIQL to tasks outside of continuous control, such as discrete action spaces or hierarchical tasks? This could provide insight into the method’s broader applicability." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The introduction of AlignIQL as a constrained optimization approach represents a significant advancement in offline reinforcement learning, providing a fresh perspective on implicit policy extraction.\n- The empirical results demonstrate that AlignIQL and its variant achieve competitive performance across a variety of D4RL benchmarks, particularly in challenging tasks with sparse rewards, indicating the effectiveness of the proposed methods.\n- Theoretical Insights: The paper offers valuable theoretical analysis regarding the use of weighted regression for policy extraction, enhancing the understanding of the underlying mechanisms that contribute to the success of IQL methods.\n- By incorporating policy alignment constraints, the approach ensures that the extracted policies are both effective and representative of the learned Q-function, leading to improved stability and reliability in offline settings.\n- AlignIQL shows increased robustness to variations in hyperparameters compared to existing methods, which is crucial for practical applications where tuning can be challenging." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces AlignIQL, a novel approach to extracting implicit policies in offline reinforcement learning by formulating the implicit policy-finding problem as a constrained optimization problem. 
AlignIQL and its variant AlignIQL-hard leverage policy alignment constraints to ensure that the extracted policy reflects the learned Q-function while maintaining the advantages of decoupling the actor and critic in Implicit Q-Learning (IQL). The authors demonstrate that their method achieves competitive or superior performance on D4RL datasets, particularly excelling in complex sparse reward tasks like AntMaze and Adroit, while also being more robust to hyperparameter variations than existing methods. Additionally, the study provides theoretical insights into the conditions under which weighted regression can be effectively utilized for policy extraction in IQL. Overall, the proposed algorithms contribute to a better understanding of the bottlenecks in IQL-style methods and offer a more effective means for implicit policy extraction in offline reinforcement learning." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- While the experiments demonstrate competitive performance on specific D4RL benchmarks, the applicability of AlignIQL to other domains or more diverse environments may not be fully established, limiting its generalizability.\n- The proposed framework may introduce additional complexity in implementation compared to existing methods, which could deter practitioners who seek simpler solutions for offline reinforcement learning.\n- Although the paper includes comparisons with several baseline methods, it may benefit from a more comprehensive analysis against a broader range of state-of-the-art algorithms to fully contextualize its contributions.\n- The performance improvements may be contingent on the quality of the dataset used, raising concerns about the approach’s robustness in real-world scenarios where data can be noisy or incomplete.\n- The computational requirements for training AlignIQL could be higher than those of simpler methods, potentially limiting its scalability for larger-scale applications or real-time scenarios." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We introduce a new method (AlignIQL) to extract the policy from the IQL-style value function and explain when IQL can utilize weighted regression for policy extraction." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024aligniql,\ntitle={Align{IQL}: Policy Alignment in Implicit Q-Learning through Constrained Optimization},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3Xfa63ggsq},\nnote={under review}\n}" }, "abstract": { "value": "Implicit Q-learning (IQL) serves as a strong baseline for offline RL, which never needs to evaluate actions outside of the dataset through quantile regression. However, it is unclear how to recover the implicit policy from the learned implicit Q-function and whether IQL can utilize weighted regression for policy extraction. IDQL reinterprets IQL as an actor-critic method and gets weights of implicit policy, however, this weight only holds for the optimal value function under certain critic loss functions. In this work, we introduce a different way to solve the $\\textit{implicit policy-finding problem}$ (IPF) by formulating this problem as an optimization problem. 
Based on this optimization problem, we further propose two practical algorithms AlignIQL and AlignIQL-hard, which inherit the advantages of decoupling actor from critic in IQL and provide insights into why IQL can use weighted regression for policy extraction. Compared with IQL and IDQL, we find that our method keeps the simplicity of IQL and solves the implicit policy-finding problem. Experimental results on D4RL datasets show that our method achieves competitive or superior results compared with other SOTA offline RL methods. Especially in complex sparse reward tasks like AntMaze and Adroit, our method outperforms IQL and IDQL by a significant margin." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Offline reinforcement learning", "optimization", "Implict Q learning", "diffusion model" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/1256b0e4bbfbf676b405cc191165f08db3a23050.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/0e468d14c672f932e03283aa2fd1494291d799b8.zip" }, "title": { "value": "AlignIQL: Policy Alignment in Implicit Q-Learning through Constrained Optimization" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3YQYo1O01W
Insight Over Sight? Exploring the Vision-Knowledge Conflicts in Multimodal LLMs
main
Active
Multimodal Large Language Models;Knowledge Conflict;Diagnostic benchmark;Commonsense Knowledge
alignment, fairness, safety, privacy, and societal considerations
3;3;5
4;4;4
2;2;3
2;2;2
3;2;2
3.666667
4
2.333333
2
2.333333
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Why are the open-ended questions called “subjective”? They do not appear to be subjective at all. For example, Figure 9 shows a person with a paddle in his hands. Why is “playing a guitar” a subjective answer if it is objectively not true? Similarly, in figure 14, why is the answer to “where is a cook serving food?” subjective? It is quite clear the cook is standing in a bathroom.\n\n2. Section 3.4, what are the hyperparameters to reproduce the results in table 2? \n\n\n**Other**\n\n* The human-in-the-loop components in Figure 2 are not clear.\nAs a note, it would have been nice if there was an attempt to understand why MLLMs underutilize visual information, if you believe this is the case, but I believe the resource itself can be useful as is." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The topic is timely and important\n2. Once the issues below are addressed, ConflictVis can be a useful benchmark to test models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies knowledge conflicts in multimodal large language models (MLLMs). The authors propose a human-in-the-loop pipeline to create ConflictVis, a benchmark designed to elicit knowledge conflicts, comprising 374 examples. Each example consists of a generated image and four questions. The authors use ConflictVis to test nine MLLMs and show models overly-rely on textual inputs as well as their parametric knowledge. Finally, they propose Focus-on-Vision (i.e., the prompt “Please focus on the visual information”) to counter underutilization of visual information." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "In general, I find the premise of the paper to be good and interesting: Detecting knowledge conflicts or visual information underutilization is important and the structure of instances in ConflictVis is easy to understand. However, It is very much not clear to me why some questions are harder than others nor why these are the right questions to ask about the images. In section 3, the text omits a lot of detail and as a result, the conclusions are not convincing. \n\nThese weaknesses can be improved, but require major overhaul to section 3, and possibly to section 2.\n\n1. The paper jumps between textual context (i.e., the input text) and parametric knowledge, i.e., the information the model encodes, irrespective of any particular input. Sometimes it refers to them both as “underutilization of visual information”, which perhaps would have been the approach to take throughout the entire paper. 
But it doesn’t take the time to clearly distinguish between them, which makes it hard to follow (lines 473-499 shortly makes this distinction, but it is missing from the rest of the paper).\n\n\n2. **ConflictVis**\n- From my understanding, the method requires human evaluation every time it is used (lines 277-278). It is not clear to me why. Especially if the images are not the subject of evaluation, then the correct answers can be predetermined and be marked as part of ConflictVis. This way, you would only need an LLM to compare the output by the MLLM with the predetermined answer, and this step would be automatic.\n- There is no detail about what makes some questions harder than others, or why multiple difficulties are needed.\n\n\n**Substantial Details Missing in Experiments**\n- **Section 3.2 Clarity on MLLMs Output Comparison:**\n - The paper does not specify what the MLLMs outputs are compared against in the sanity test. It is assumed to be the yes/no answers from human annotations, but this is not explicitly stated.\n - **Suggestion:** Clarify the comparison benchmarks in the text or provide a reference to where those details can be found.\n\n- **Uncertainty Calculation and Aggregation (lines 301-314):**\n - The method of computing and aggregating uncertainty is not described. Additionally, details such as the number of samples from each benchmark, how these samples were selected, and the sampling method are absent.\n - **Suggestion:** Include a more detailed methodology or direct the reader to an appendix where these methods are outlined.\n\n**Clarity on MLLM Responses in Section 3.3 Without Images:**\n- **Context of Questions Without Images:**\n - It is unclear what kind of answers the MLLMs provide to questions containing determiners when no image is present to define the referent. For instance, the question posed in line 295, \"Is *the baby* on the bed fixing a computer?\", assumes knowledge of 'the baby' which hasn't been introduced.\n - **Potential Issue:** If an MLLM like GPT-4o rejects this question due to the lack of contextual introduction of 'the baby', and this is counted negatively, it suggests a design flaw in the experiment. The experiment should distinguish between a model's reliance on introduced contextual knowledge versus its parametric knowledge.\n - **Suggestion:** Clarify how responses are evaluated in the absence of images and consider revising the methodology to accurately test for over-reliance on textual versus visual information. This could involve a different scoring approach where the context provided by images is factored into the evaluation of responses.\n\n**Critique of Focus on Vision (FoV) Methodology**\n- **Inconsistency and Lack of Improvement (Table 2):**\n - The FoV approach, which merely prompts the model to focus on visual information, does not introduce a novel technique, as implied in the abstract and introduction. The data presented in Table 2 does not demonstrate consistent or meaningful improvement over the existing baselines. When improvements do occur, they are marginal.\n - **Implication:** If FoV had shown a significant performance gap over other baselines, it could have substantiated the paper's claims about the under-utilization of visual information in current models." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Can you clarify why model accuracy remains high on this dataset?\nAlthough you describe this task as more challenging than traditional VQA, the performance does not show a significant gap between them.\nIf the task could be more challenging, there might be more to analyze.\nFor now, the cause of this phenomenon can be easily attributed to the language bias because MLLMs rely more on the textual modality.\nIf you can introduce more diverse conflicts, you might be able to find out new problems in MLLMs.\n\n\n2. Why do you use the vicuna-13b for probability rather than larger or more powerful model.\nTo ensure the commonsense is embedded in the model, would it be better to train a model using a commonsense corpus?\n\n3. Could you explain why Yes/No questions have lower performance compared to other question types?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper studies an overlooked problem of vision-knowledge conflicts for MLLMs.\n\n2. The paper's generated images serve as a contribution to constructing counter-commonsense conflicts." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies the context-memory knowledge conflicts in MLLMs by constructing a counter-commonsense multimodal benchmark.\nThey generate images using less frequent commonsense triplets.\nThe results show that MLLMs have problems when facing counter commonsense visual information.\nThey also design a prompting strategy to mitigate the problem." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The proposed benchmark does not fully capture the severity of vision-knowledge conflicts, as GPT-4 achieves more than 90% accuracy, suggesting that more challenging scenarios might be necessary to evaluate SOTA models.\n\n2. The analysis of vision-knowledge conflicts remains relatively superficial. \nThe fundamental reason stated in the paper can be attributed to a long-standing common opinion that MLLMs have language bias, which is already pointed out by previous works.\n\n3. This work only investigate the counter-commonsense conflicts and does not explore other types of vision-knowledge conflicts, such as those involving factual conflicts and world-knowledge conflicts." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "what was the exact prompt in Section 3.3?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Offers a novel and well-defined benchmark (ConflictVis) with rigorous human-in-the-loop validation.\n- Good experiment setups including sanity check, comprehensive question type evaluation.\n- It's a well-structured, well-written, and easy-to-follow paper." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper addresses vision-knowledge conflicts in multimodal large language models (MLLMs), where the model's commonsense knowledge can conflict with visual inputs, often leading to vision-irrelevant outputs. To tackle this, the authors introduce an automated pipeline and a new benchmark, ConflictVis, designed to create scenarios that test MLLMs on handling commonsense contradictions between text and visual information. The study shows that MLLMs frequently over-rely on parametric knowledge, especially for simpler questions, and introduces a \"Focus-on-Vision\" (FoV) prompting strategy to encourage MLLMs to prioritize visual data. Experimental results across nine models indicate that the FoV strategy enhances performance by reducing dependency on textual information in visually conflicting contexts." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The practical relevance of these visual conflict scenarios in real-world applications is unclear. I don't think users would actually input counter-commonsense images, such as a baby on a bed fixing a computer in their daily lives. I would recommend using use-cases in WildVision[1] which collects real-world use-cases. Additionally, the reliance on a benchmark that emphasizes rare, contrived scenarios may not reflect typical user interactions with MLLMs, potentially limiting the benchmark’s broader applicability in evaluating MLLM performance.\n- In comparison to textual knowledge conflicts, the memorization effect here is relatively low and can be addressed with a simple prompt strategy, which reduces the significance of this issue.\n- The proposed FoV method, though effective, is a simple prompt adjustment that may not generalize across all multimodal contexts or complex use cases beyond commonsense conflicts. In fact, for all multimodal inputs, it seems intuitive that prompts should at least include “Based on the given image.” The limited utilization of visual information could be a result of poorly structured initial prompts used in Section 3.3. (Incidentally, what was the exact prompt in Section 3.3?). Thus, the degree of knowledge conflict may not be as serious as suggested by the authors.\n\nOverall, I have concerns on the generalizability of the findings and the practicality of the benchmark scenarios. While the work provides useful insights into handling vision-knowledge conflicts, the proposed solutions and evaluation settings may not align well with real-world usage or fully address the complexities of multimodal reasoning in MLLMs.\n\n[1] Lu et al., WildVision: Evaluating Vision-Language Models in the Wild with Human Preferences, NeurIPS 2024." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024insight,\ntitle={Insight Over Sight? Exploring the Vision-Knowledge Conflicts in Multimodal {LLM}s},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3YQYo1O01W},\nnote={under review}\n}" }, "abstract": { "value": "This paper explores the problem of commonsense-level vision-knowledge conflict in Multimodal Large Language Models (MLLMs), where visual information contradicts model's internal commonsense knowledge (see Figure 1). To study this issue, we introduce an automated pipeline, augmented with human-in-the-loop quality control, to establish a benchmark aimed at simulating and assessing the conflicts in MLLMs. Utilizing this pipeline, we have crafted a diagnostic benchmark comprising 374 original images and 1,122 high-quality question-answer (QA) pairs. This benchmark covers two types of conflict targets and three question difficulty levels, providing a thorough assessment tool. Through this benchmark, we evaluate the conflict-resolution capabilities of nine representative MLLMs across various model families and find a noticeable over-reliance on textual queries. Drawing on these findings, we propose a novel prompting strategy, \"Focus-on-Vision\" (FoV), which markedly enhances MLLMs' ability to favor visual data over conflicting textual knowledge. Our detailed analysis and the newly proposed strategy significantly advance the understanding and mitigating of vision-knowledge conflicts in MLLMs.\nThe data and code will be released." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Multimodal Large Language Models", "Knowledge Conflict", "Diagnostic benchmark", "Commonsense Knowledge" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/9e7be44d49c631fe75bd7731fc81f3bf0be22dd5.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "Insight Over Sight? Exploring the Vision-Knowledge Conflicts in Multimodal LLMs" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3Z2flzXzBY
Selective Label Enhancement Learning for Test-Time Adaptation
main
Active
label enhancement;test-time adaptation;distribution shift
transfer learning, meta learning, and lifelong learning
5;5;5;6;8
2;3;3;4;5
3;2;3;3;3
2;2;2;2;3
3;2;2;3;4
5.8
3.4
2.8
2.2
2.8
0.908108
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please refer to the Weakness." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The proposed method is supported with a theory guarantee.\n2. The experiment is diverse datasets, which confirm the effectiveness of the proposed methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a new pseudo-learning algorithm that combines one-hot label learning and candidate label learning approaches. Each learning paradigm is conducted on its respective sample set, referred to as the certain set for one-hot label learning and the uncertain set for candidate learning. The key distinction from other pseudo-label learning papers is that the authors propose a theoretical guarantee to ensure that the selected labels in the pseudo-set will correspond to the ground truth if certain conditions are met (Proposition 1). In the initial learning stages, the model focuses more on candidate set learning and gradually shifts toward minimizing the one-hot label loss as it updates more on target samples. The authors provide a theory indicating that the generalization bound becomes tighter as more target samples are incorporated. Experimental results demonstrate the algorithm's performance compared to other TTA learning methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The reviewer's main concern is the novelty of the proposed approach: adapting pseudo-learning and candidate learning is already popular in TTA and domain adaptation, as the authors discussed in Section 2. The main novelty here comes from Proposition 1, where the authors propose to ensure the correctness of pseudo labels under specific assumptions. The condition is that the learned weight and the optimal one need to be close enough (the closeness is measured by the difference in the probability of each class in the input samples, and $\\tau(r)$ is the threshold). The selected pseudo labels are considered true when this condition is met. However, how can we ensure that this condition is always satisfied? If the reviewers understand correctly, this condition is based on the threshold $\\tau(r)$, which is initialized to 1 and then gradually reduced to a specific value. When $\\tau(r)$ is smaller than 1, how can we ensure that the distance between the learned models and the optimal one is smaller than this threshold?" 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- In the proposed method, why the authors reduced to improve the reduced threshold could improve the reliability of pseudo labels.  \n- In the experiments, why did the authors only adopt online test-time adaptation approaches as the baselines?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Instead of assigning definite pseudo-labels to test samples, candidate pseudo-label sets are assigned to uncertain ones via selective label enhancement.\n- The proposed method partitions test samples into confident and uncertain subsets based on the model’s predictive confidence scores, with confident samples receiving one-hot pseudo-labels, and uncertain samples being assigned candidate pseudo-label sets\n- The theory establishes a generalization bound for TTA that by incorporating a greater number of target domain samples with effective supervision, a tighter generalization bound can be achieved." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies Test-time adaptation (TTA), which aims to adapt a pre-trained model to the target domain using only unlabeled test samples. The authors proposed a new TTA framework, which assigns candidate pseudo-label sets to uncertain ones via selective label enhancement. The model is progressively trained on certain and uncertain pseudo-labeled data while dynamically refining uncertain pseudo-labels, leveraging increasing target adaptation monitored throughout training. Experiments on various benchmark datasets validate the effectiveness of the proposed approach." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- In the proposed method, the authors need to provide more details about the reduced threshold to improve the reliability of pseudo labels.  \n- Why use image corruption datasets to validate the effectiveness of the proposed method? 15 types of common image corruptions should be shown clearly.\n- This paper uses a vanilla variant that all samples annotated with candidate pseudo-labels sets excluded from model updates to demonstrate the effectiveness of the candidate pseudo-labels sets of the proposed method. More detests could be added in this ablation experiments." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1.\tSince there are many approaches for creating pseudo-label candidate sets, has the paper compared its method with other approaches for selecting pseudo-label candidates? Does this method have any unique advantages specifically for the test-time adaptation (TTA) task? Or is it also applicable to semi-supervised or unsupervised tasks?\n2.\tWhat is the buffer size used in the experiments? Was there any ablation study conducted on the buffer size? If the buffer were removed, would this method still be effective?\n3.\tIn the experiment section, why do ERM and T3A perform so poorly on CIFAR10-C and CIFAR100-C? In the original papers and subsequent TTA studies, their performance was not as weak." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.\tThe writing of this paper is clear, and both the problem definition and the method description are well articulated.\n2.\tThe motivation behind the proposed label enhancement method is reasonable, and there is substantial theoretical analysis provided.\n3.\tThe experiments in the paper are relatively thorough, demonstrating the superiority of the proposed method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The problem studied in this paper is the conventional test-time adaptation. When assigning pseudo-labels to test samples, the paper assigns one label to samples with high confidence, while assigning a candidate set of labels to less confident samples. It uses a buffer to store samples that could not be labeled, allowing the model to attempt labeling them in subsequent batches. Finally, the model is updated by using cross-entropy with the one-hot encoded pseudo-labels. The effectiveness of the method is validated across multiple datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tDespite the relatively comprehensive theoretical analysis, the design of the method in this paper is overly simplistic. Similar approaches using candidate pseudo-label sets have long existed in the field of semi-supervised learning.\n2.\tThe maintenance of this buffer seems somewhat unfair. If a sample’s label remains undecided for an extended period, it will be repeatedly seen by the model in subsequent iterations. Although the buffer size imposes some constraints, the repeated processing of test samples could still introduce bias. Additionally, maintaining a buffer incurs significant overhead. If the buffer becomes too large, the number of samples to be predicted in each batch will be dictated more by the buffer size than by the batch size itself." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "The main concerns are as follows:\n1. The novelty of this paper should be emphasized more clearly. From the motivation perspective, dividing confident and non-confident samples and applying progressive training is a fairly conventional approach. Similar ideas have been extensively used in domain adaptation (DA) problems, and several papers in TTA focus on pseudo-labeling. The authors should pay more attention to these closely related works to highlight the novelty of this paper better.\n2. The generalization error bound provided is a little general and offers limited guidance for the current problem. Based on the motivation of the paper, if the so-called more effective supervised information can be quantified? If pseudo-label error terms or confidence levels could be incorporated, it would help reveal how the label-generation process impacts generalization performance, thereby offering more practical insights. Additionally, how is the divergence term in the bound reduced in this paper? How does it influence pseudo-labeling and progressive adaptation?\n3. Regarding the experimental setup, the datasets used in this paper differ from those employed in previous methods. The rationale for these choices should be explained in detail. Furthermore, for certain methods with the same settings like PROGRAM, why do the results differ from the original paper when using the same benchmark and backbone? Could it be due to different settings or other reasons? This should be clarified in the paper, as such vague experimental setups and comparisons make it difficult for readers to accurately assess the actual performance of the method.\n4. The paper lacks ablation studies to evaluate the effectiveness of each module. Additionally, since the proposed method is an online model, time efficiency is an important metric that should be discussed, especially considering the additional computational overhead introduced by the approach.\n5. I am also curious about the sensitivity of the threshold selection strategy. It doesn’t seem highly sensitive, but how does it perform over a broader parameter range or with different thresholding strategies? This could be a point worth discussing in the paper.\n\nIf the authors can adequately respond to these concerns, I would consider increasing my score." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "TTA is a critical research area for machine learning models to adapt to distribution shifts in real-world scenarios, particularly with wide applications in fields such as autonomous driving and medical image analysis. This article enhances model performance on TTA issues by dynamically adjusting pseudo-labels and capturing their uncertainties. Additionally, the detailed derivation of the generalization error bound in the article offers theoretical guarantees." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This article proposes a method for addressing the TTA problem by dividing confident samples from uncertain samples and progressively updating pseudo-labels, alleviating errors caused by unconfident pseudo-labels in TTA scenarios. 
The article provides a systematic and comprehensive theoretical generalization error bound and validates its effectiveness on multiple benchmark datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper lacks sufficient persuasive experimental evidence, suggesting the need to include relevant ablation studies, discussions on time overhead, and the rationale behind experimental settings. The novelty of the paper is not distinctly highlighted, requiring a more detailed discussion of the differences from related methods. The theoretical guidance is relatively weak; exploring the quantification of pseudo-label errors could strengthen the paper." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weakness" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. PASLE effectively partitions test samples into confident and uncertain subsets, improving labeling accuracy for uncertain samples.\n\n2. The model is trained iteratively on both certain and uncertain pseudo-labeled data, enhancing adaptation capabilities over time.\n\n3. The paper establishes a generalization bound that suggests increased supervision from target domain samples can lead to improved model performance." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces the Progressive Adaptation with Selective Label Enhancement (PASLE) framework for test-time adaptation (TTA). Unlike traditional methods that assign definite pseudo-labels, PASLE assigns candidate pseudo-label sets to uncertain test samples while providing one-hot labels to confident samples. This approach allows the model to adapt progressively, refining the uncertain pseudo-labels based on the model's evolving understanding of the target domain." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. I am confused about some notations in theorem 1: what is the specific meaning of d_{h, H}(S, T)? It seems that d_{h, H}(S, T) is a constant under your algorithm. Does theorem 1 show the superiority of your algorithm, given that there is always a constant gap between \\epsilon_T(\\hat{h}) and \\epsilon_T (h_T^*)?\n\n2. Is it reasonable to use the two hyper-parameters to control the iteration in equation 9? Since different datasets require different parameters and there is no prior knowledge to guide us in choosing suitable values, it is hard to achieve the best results.\n\n3. More experiments on the sensitivity to τ_start, τ_end, and the batch size on other datasets are expected to be seen."
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose the progressive adaptation with selective label enhancement framework for test-time adaptation." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024selective,\ntitle={Selective Label Enhancement Learning for Test-Time Adaptation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3Z2flzXzBY},\nnote={under review}\n}" }, "abstract": { "value": "Test-time adaptation (TTA) aims to adapt a pre-trained model to the target domain using only unlabeled test samples. Most existing TTA approaches rely on definite pseudo-labels, inevitably introducing false labels and failing to capture uncertainty for each test sample. This prevents pseudo-labels from being flexibly refined as the model adapts during training, limiting their potential for performance improvement. To address this, we propose the Progressive Adaptation with Selective Label Enhancement (PASLE) framework. Instead of definite labels, PASLE assigns candidate pseudo-label sets to uncertain ones via selective label enhancement. Specifically, PASLE partitions data into confident/uncertain subsets, assigning one-hot labels to confident samples and candidate sets to uncertain ones. The model progressively trains on certain/uncertain pseudo-labeled data while dynamically refining uncertain pseudo-labels, leveraging increasing target adaptation monitored throughout training. Experiments on various benchmark datasets validate the effectiveness of the proposed approach." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "label enhancement", "test-time adaptation", "distribution shift" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/d31254698ee3fe141fdb32d64b6e928f67899678.pdf" }, "presentation": null, "primary_area": { "value": "transfer learning, meta learning, and lifelong learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "Selective Label Enhancement Learning for Test-Time Adaptation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3ZDMQGQgkE
Preference Discerning in Generative Sequential Recommendation
main
Active
Generative Retrieval;Sequential Recommendation;Preference Discerning;LLM
generative models
3;3;3;5;6
4;4;4;4;4
3;3;2;3;3
2;2;1;4;3
2;3;3;2;2
4
4
2.8
2.4
2.4
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please refer to weaknesses" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "-\tThis paper uses four diverse datasets to ensure the generalizability and reliability of the experimental results.\n\n-\tThe proposed model, Mender, is evaluated across multiple dimensions such as preference-based recommendation, sentiment following, fine-grained and coarse-grained steering, and history consolidation. The results show that Mender significantly outperforms existing state-of-the-art methods, particularly in preference guidance and sentiment following, demonstrating its robustness and effectiveness." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper aims to enhance personalized recommendations by explicitly incorporating user preferences and historical interactions. The proposed method Mender uses a pre-trained language model to generate user preferences from comments and integrates these preferences with historical data using cross-attention mechanisms. Experimental results show that Mender outperforms existing state-of-the-art methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "-\tThe methodology section should be reorganized to provide a detailed explanation of the preference generation process. Mathematical formulations are expected to be included for explicit understanding, and pseudo-code is recommended to enhance clarity and reproducibility.\n-\tIt is kindly recommended to add further discussion about how does the benchmark generation benefit personalization modeling." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. What is the major technical contribution of MENDER? \n2. Are there some direct and objective evaluation methods to check the effectiveness of preference-discerning results? \n3. What is the motivation for using the generative recommendation pipeline? \n4. Is MENDER a efficient method compared with traditional sequential recommendation baselines as SASRec?" 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Novel Benchmark. The authors suggest a new benchmark to evaluate the personalization ability of the user preference description. \n- Abundant Results. Extensive experiments are conducted. \n- Credible Reproduction. Reproduction details are available in Appendix." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes MENDER, aiming to enhance the personalization of sequential recommendation models. Specifically, the authors first design a preference discerning paradigm based on zero-shot LLMs. With the obtained user preference, the authors construct the generative recommendation framework based on RQ-VAE. Extensive experimental results are provided to show its effectiveness." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Limited Technical Contribution. Overall, the framework is constructed based on existing works. The proposed method, MENDER, mainly consists of two modules, RQ-VAE and feed-forward components (Emb or Tok). Other researchers have suggested both modules, which may indicate the limited technical contribution of this work. \n- Lack of the Cross-validation on Benchmark. The suggested \"holistic\" benchmark is subjective and not double-checked by objective ground truth. The overall performance only reflects the indirect effectiveness of additional preference summarization, while the success of preference discerning should be further validated. \n- Inadequate Motivation. The motivation for enhancing the personalization and applying the generative recommendation is not supported. How do authors define \"personalization\" and examine \"personalization\"? Why do authors only construct the generative model? Can we integrate MENDER with discriminative models? \n- Unknown Efficiency. The efficiency of the proposed framework has not been tested.\n\nMinor problems:\n- The last block of Table 2 is in the wrong format. \n- Code is not available." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Why do you think the five factors in your \"evaluation benchmark \" are equally important and are the only concerns by sequential recommendation systems?\n\nWhat is the difference between your proposed preference generation framework with existing ones (as listed in \"weakness\" section)." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. 
The authors would like to propose a thorough evaluation framework (the authors claim it to be a \"benchmark\") for sequential recommendation systems, which is a nice direction.\n\n2. The experimental results are credible and show improvement compared with some existing baselines." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The submission focuses on sequential recommendation techniques. The main contributions are that (1) the authors propose an LLM-based user preference generation method based on user-generated reviews and (2) they propose an evaluation framework that contains five different aspects that should be taken into consideration by sequential recommendation systems." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The novelty is rather limited. Extracting user preference information from their reviews is not novel. However, these existing works are not mentioned or compared by the authors. Some examples include:\n\nUser-LLM: Efficient LLM Contextualization with User Embeddings, https://arxiv.org/abs/2402.13598 (The work investigates how to capture latent user behaviors into preferences)\nDo LLMs Understand User Preferences? Evaluating LLMs On User Rating Prediction, https://arxiv.org/abs/2305.06474 (This work investigates how LLMs comprehend user preferences and compares the performances of different LLMs on this issue.)\nReview-driven Personalized Preference Reasoning with Large Language Models for Recommendation, https://arxiv.org/abs/2408.06276 (this work proposes to extract subjective preferences from raw reviews, which is a key contribution the authors claim)\n\n2. The proposed evaluation benchmark is rather straightforward and not so reasonable. This kind of framework should be validated by either product managers or large scale user studies. The radar chart indicates that these five dimensions are equally important, which is also not validated by any evidence. If the authors would like to propose such a framework, I would suggest comparing the proposed one with actual user experiences through practical user studies." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Given the results of Mender$_{Tok}$-Pos-Neg in Figure 5, achieving the best sentiment following results does not necessarily ensure the model's proficiency in other dimensions. Does the pursuit of high performance in sentiment following adversely affect the model's overall capabilities?\n2. Why is the time factor not considered, given that user preferences can change significantly over time, especially considering that the time span of the datasets can be decades? What are the implications of not considering the time factor in the current model?\n3. 
In Section 4, Equation 1 already takes every item in the sequence except the last item $i_{T_{u}}$ into account, so what is the meaning of \"repeating this generation process for each item in $s_{u}$\" in line 203?\n4. In Section 4.1, line 237, why can the steering ability be achieved by creating new sequences? Is there a reference that can prove this? In Appendix D.2, line 1276, a figure reference is missing.\n5. Still in Section 4.1, line 241, how are $p_{1}$ and $p_{2}$ obtained? What is the point of combining them with new sequences?\n6. Still in Section 4.1, line 243, since $\\hat{i}_t$ represents a distinct item, why is its sequence combined with $p_1$, which represents the preference of a similar item?\n7. Why are the NDCG results of sentiment following not provided in Table 2 and other tables in the Appendix?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper identifies a critical issue in sequential recommendation: the failure to explicitly capture and utilize user preferences, prompting the introduction of a new task: preference discerning.\n2. It proposes a novel benchmark comprising five key dimensions to evaluate the preference discerning abilities of existing models.\n3. The paper enhances the RQ-VAE framework by directly representing both user interaction history and preferences, while also introducing two variants that encode inputs in different ways.\n4. Extensive experiments are conducted, accompanied by detailed analysis to validate the findings." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a new benchmark and proposes a novel method called Mender to evaluate and enhance the preference discerning capabilities of sequential recommendation systems. The benchmark assesses models across five dimensions, focusing on their capacity to extract and utilize user preferences from datasets. Recognizing that existing methods lack key capabilities of preference discerning, the authors propose Mender, a multimodal generative retrieval approach which effectively extracts user preferences and achieves state-of-the-art performance on the proposed benchmark. Experimental results further demonstrate the effectiveness of the method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. No NDCG results for sentiment following are reported in Table 2 or the Appendix. I assume the results are close to zero, suggesting that a logarithmic scale should be used to better analyze the discrepancies between the models. \n2. The time factor is not considered, given that user preferences can change significantly over time, especially considering that the time span of the datasets can be decades. Simply limiting the user sequence to the 20 most recent items does not fully eliminate time bias. Instead, the time interval of user-item interactions should be restricted during sampling to better capture user preferences.\n3. The methodology outlined in Section 4 is unclear, with similar issues arising in Section 4.1 on Fine-Grained & Coarse-Grained Steering, where the concepts are not adequately explained. In Equation 1, the entire sequence is considered, while the subsequent statement describes repeating the process for each item in the sequence, leading to ambiguity. 
Furthermore, in Fine-Grained & Coarse-Grained Steering, there is no reference provided to justify the validity of this approach. The sequence processing in this section also lacks rationality, as it combines a distinct item $\\hat{i}_t$ with $p_1$, which represents the preference of a similar item. \n4. Experiments with only three baselines is not convincing enough, new baselines should be added, including https://arxiv.org/abs/2311.09049 (Zheng, Bowen, et al. \"Adapting large language models by integrating collaborative semantics for recommendation.\" ICDE 2024). This paper also employs RQ-VAE, in which LLM-encoded text embedding of the item is utilized as input." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please refer to weaknesses" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The motivation of this work, leveraging user preference in recommender system, is good.\n2. The author conducts extensive experiments." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper first introduces a new benchmark to evaluate the model's ability to capture user preference. Then Mender is proposed to integrate LLM-generated user preferences to enhance the generative recommender system. Experiment results on its proposed benchmark show improvement on the proposed method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Regarding Benchmark Design:\n\n1. While preference-based recommendation is undoubtedly a core aspect, the practical value of the tasks such as Sentiment Following, Fine-Grained & Coarse-Grained Steering, and History Consolidation is questionable. This raises concerns about the overall contribution of the benchmark.\n2. The Fine-Grained & Coarse-Grained Steering task is confusing. The paper states, “we associate these two items with different user preferences, denoted as p1 and p2, respectively,” but the relationship between p1, p2, and similar or distinct items is unclear. How are “different user preferences” determined? Additionally, in the new sequences created, why is p1 added to the sequence of very distinct items while p2 is added to the ground truth item sequence? This contradicts the earlier association of p1 and p2 with similar and distinct items, respectively. What role does the similar item play?\n3. The design of the Sentiment Following task does not adequately reflect the model’s ability to follow sentiment. The description is also unclear, and I suggest the authors reorganize this section.\n4. The practical value of History Consolidation is questionable, and its evaluation metric seems unnecessary. Why not train the model directly using the five preferences? 
The paper claims to “infer which preference is most relevant for predicting,” but there is no experimental evidence demonstrating this capability. In fact, the performance with multiple preferences is even worse than with a single preference.\n5. The experimental discussion on each task, particularly Sentiment Following and History Consolidation, is insufficient.\n\nRegarding Presentation:\n1. Missing reference on line 1275.\n2. Typo on line 1167: \"tr iggered\" should be \"triggered.\"\n3. Typo on line 401: \"48.3.4%\" should be corrected.\n4. Method names are displayed incorrectly in lines 403-406.\n5. In Table 2, the performance drop for History Consolidation on the Steam dataset seems miscalculated. The relative decline should be based on the better-performing Mender-emb, not Mender-Tok.\n6. The section titled \"PREFERENCE DISCERNING\" in part three should likely be part of the Methodology (Section 4.2). It is unclear why this is presented as a separate section.\n\nRegarding Experiments:\n1. The selection of baselines is insufficient, with only three included. One of these, TIGER, is an ID-based model that does not leverage preference, making the comparison unfair. The two VocabExt variants either introduce a gap between randomly initialized item embeddings and semantic information, or they lack pre-trained preference understanding, making them variants of TIGER rather than fair comparisons. The authors should consider two sets of baselines: (1) preference-based recommendation models and (2) advanced TIGER variants, such as LETTER, LC-Rec.\n2. The statement in line 282, “Mender-Emb allows pre-computing item and preference embeddings, resulting in improved training efficacy,” conflicts with the experimental results, as Mender-Emb consistently underperforms compared to Mender-Tok in Table 2.\n3. Although the benchmark is a key contribution of the paper, there is insufficient discussion of most tasks in the experimental section, especially History Consolidation and Sentiment Following.\nThe lower performance of History Consolidation compared to Recommendation raises questions about the usefulness of combining five preferences versus a single preference. This casts doubt on both the validity of the preference design and the method’s ability to effectively leverage preferences. Additionally, the abnormal results on the Steam dataset lack sufficient discussion and explanation." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a new paradigm called preference discerning along with a benchmark and a new baseline and evaluate capabilities of state-of-the-art generative retrieval methods" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024preference,\ntitle={Preference Discerning in Generative Sequential Recommendation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3ZDMQGQgkE},\nnote={under review}\n}" }, "abstract": { "value": "Sequential recommendation systems aim to provide personalized recommendations for users based on their interaction history. To achieve this, they often incorporate auxiliary information, such as textual descriptions of items and auxiliary tasks, like predicting user preferences and intent. Despite numerous efforts to enhance these models, they still suffer from limited personalization. To address this issue, we propose a new paradigm, which we term *preference discerning*. 
In *preference discerning*, we explicitly condition a generative sequential recommendation system on user preferences within its context. The user preferences are generated by large language models (LLMs) based on user reviews. To evaluate *preference discerning* capabilities of sequential recommendation systems, we introduce a novel benchmark that provides a holistic evaluation across various scenarios, including preference steering and sentiment following. We assess current state-of-the-art methods using our benchmark and show that they struggle to accurately discern user preferences. Therefore, we propose a new method named Mender (**M**ultimodal prefer**en**ce **d**iscern**er**), which improves upon existing methods and achieves state-of-the-art performance on our benchmark. Our results show that Mender can be effectively guided by human preferences, paving the way toward more personalized sequential recommendation systems. We will open-source the code and benchmarks upon publication." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Generative Retrieval", "Sequential Recommendation", "Preference Discerning", "LLM" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/0dd9882c0a8967228ca276ee46b44b0ce6f352ff.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Preference Discerning in Generative Sequential Recommendation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3ZdGSTxKuy
What can we learn from Harry Potter? An Exploratory Study of Visual Representation Learning from Atypical Videos
main
Active
Open-world learning;Out-of-distribution detection;Video classification
unsupervised, self-supervised, semi-supervised, and supervised representation learning
1;1;3;3
5;4;5;4
1;3;2;2
1;1;1;2
2;3;3;3
2
4.5
2
1.25
2.75
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- it appears that atypical video data is useful for OOD, and the attempted OE-baed methods. However, it seems that the data and methods presented in the work are independent of videos and could be adequately demonstrated in NLP, audio, or image domains as well. Why is the focus solely on video?\n- The results show that there is no convergence. From the results in Fig. 4 and Fig. 5, it is evident that increasing the number of atypical categories can improve performance; why not continue to add more categories?\n- The new data quality is only 5486. If the dataset increases by one order of magnitude, what would the result be?\n- Regarding the atypical data distribution, quantity, categories, or other attributes, how should we define their quality? This work does not provide clear experimental conclusions. Therefore, this is an unfinished task, and I am unsure whether my understanding is correct." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "+ The paper is well-written and very easy to understand.\n+ The experimental results of the paper are very good compared to the baseline." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates the impact of atypical video data on representation learning for open-world discovery. A new dataset featuring diverse unusual video types is introduced to enhance model training. The study demonstrates that incorporating atypical data improves out-of-distribution detection performance, especially when the categorical diversity of samples is increased." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The experimental results are insufficient.\n- There is a lack of insight regarding the core atypical data." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. How was the gaussian noise dataset generated? What was the original pixel values that were perturbed with gaussian noise? Is it gaussian noise applied on any of the existing dataset? \n 2. How are the atypical-n categories (n=2,3) selected to finetune? Is there any motivation behind selecting certain combinations and not others?" 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The first paper to introduce a dataset containing atypical videos in sci-fi and animation category." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a novel dataset containing atypical videos across 4 categories - sci-fi, animation, unintentional actions and anomalies, to fine-tune ResNet3D-50’s out of distribution detection capability. They found that introducing more categories of atypical videos further boost performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Very limited experiments - fine-tuning only vanilla ResNet, with one in-distribution dataset and showing improvement on that is not enough at all. There are a lot of strong models in existing literature that do OOD detection with high robustness to outliers. To show effectiveness of the proposed atypical dataset, need a much more extensive experiments on stronger models and more in-distribution datasets.\n\n 2. Missing quantitative evaluations - Randomly combining some of the 2,3 categories of atypical dataset does not give any meaningful result. To get a more meaningful performance, need to show all combinations of categories. Moreover, the mean result across datasets is not a meaningful quantitative performance because of the difference in data distribution, performance across different such datasets cannot be averaged. \n\n 3. The generation of the dataset is not well motivated enough. Sci-fi and animation data is non-existent in real-world scenario, so having these as OOD samples and claiming it will generalize open-world OOD detection better is too far-fetched and not supported by quantitative evaluation. Fine-tuning the model on only these categories has worse performance than baseline (Figure 4, Table 4), which again proves that introduction of these samples are not helping the model in any way. \n 4. The dataset statistics is incomprehensive - important explanation about how videos were selected for unintentional and abnormal category from existing datasets, how frames were sampled, why the number and video length of unintentional category is much higher than others etc is missing. These important details about the skew in data distribution might drive a better analysis of performance for this category.\n 5. The effect of fine-tuning with Gaussian noise, diving48 and K400 is not well explained. No extensive analysis provided on those datasets about how they are not enough and why atypical is a more effective OOD dataset than these for outlier exposure? Moreover, fine-tuning with Diving48 already gives much better performance than fine-tuning with atypical dataset. This invalidates the effectiveness of the proposed atypical dataset. \n 6. Formatting and readability issues - what most of the symbols denote is not mentioned in table captions. Redundant figures (figure 4 and 5) that provide no new information. Extremely small font on figures and placement issues hamper readability. Moreover, baseline performance not being present in Table 3, 4, 5 causes severe readability issues." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Line 147: Does the frequency of the OOD class within the testing dataset make a difference here? Typically, OOD for new classes means that the class has multiple examples within the test dataset while a kind of anomaly only has one or very few. \n\nThe atypical data here seems to be similar to the known unknowns from Terry Boult’s work (Abhijit Bendale, Terrance E. Boult; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 1563-1572). How are you distinguishing from previous works like this and why are you renaming it to atypical? Even in Hydrics works, they call it outlier exposure. Why are you renaming it here?\n\nHow are you ensuring that the activities within the unseen data are not within the other parts of the dataset? While you look at categories in the appendix (glad to see it), how are you avoiding very similar or the same action labeled differently or how some activities aren’t labeled within the atypical datasets?\n\nSince you are training on more data, isn’t this an unfair comparison with the other methods?" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The method tests a strategy known to work in other problems such as text and image classification on video classification to show that it works with their new dataset. \n\nIt’s nice to see different experiments on how much different outlier methods work to see how each supporting dataset separately contributes to accuracy. \n\nThe paper is easy to read and clear on what they are doing." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The approach creates a video dataset combining already released video datasets and using them as known unknowns, the authors call atypical videos, for OOD classification. The authors use the method from Hendrycks et al on using this new dataset as an outlier exposure / known unknown dataset. The authors present ablation studies on how the different known datasets help with outlier detection." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Major:\nThe paper is lacking in novelty and is applying known methods on known datasets. This would fit better in an applications track at a conference rather than a general research track since there isn’t much novel about the method or the datasets. This rise to the level of novelty required to be published at ICRL or similar conferences. \n\nAuthors need to cite Terry Boult’s work where “atypical” are called “known unknowns” and aid in detection and have been around even before this works cited here: Abhijit Bendale, Terrance E. 
Boult; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 1563-1572 \n\nEquation 2 has many undefined elements that are crucial to understanding the work. What is LOE? This equation is taken from the Hendrycks paper but you didn’t include any of the accompanying references to Kimin Lee, Honglak Lee, Kibok Lee, and Jinwoo Shin. Training confidence-calibrated classifiers for detecting out-of-distribution samples. International Conference on Learning Representations, 2018 who came up with the loss you are using here. These need to be included to make this understandable. \n\nWhy did you stop at OOD rather than do OSR? What is the benefit of not classifying the known data? It would be interesting to explore if using this “atypical data” would hurt the known class classification to explore the tradeoffs with this kind of data. \n\nThe noise to create OOD is very close to many adversarial work to show robustness or to attack networks. For example: Jiang, Linxi, Xingjun Ma, Shaoxiang Chen, James Bailey, and Yu-Gang Jiang. \"Black-box adversarial attacks on video recognition models.\" In Proceedings of the 27th ACM International Conference on Multimedia, pp. 864-872. 2019. This is related to this approach since you are using this type of noise to determine OOD. \n\n\nMinor:\nLine 126ish: OSR has an OOD problem within it. OSR is a two step process where the first step is to do OOD and then, if from a known class, classify it. OOD could be considered an anomaly detection task as well though your definition above (Line 143) says that you are more focused on class labels. \n\nFigure 4, please add horizontal lines.\n\nLine 269, you are saying it is difficult but that means it is possible. Are you actually stating this is possible for real-world applications?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "What is the contribution of your work?\n\nWhy did you only evaluate the effect of pairs of data sources, and not individual data sources?\n\nWhat's the protocol for combining UCF with OOD data (e.g. is there sample balancing)? How was this protocol selected? Is it optimal for all studied data sources?\n\nDo the conclusions generalize to modern model architectures (transformers)? Do they generalize to large scale datasets (e.g. using Kinetics400 as source, rathe than UCF-101)?" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "The observation that training a model to recognize out-of-distribution samples on more out-of-distribution samples improves its test time performance makes sense. \n\nThe paper is readable." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors explore ways to improve out-of-distribution sample recognition (OOD) in action classification by exposing the model to diverse, out-of-distribution video samples at training time. In particular, they follow the approach of Hendrycks et al., and fine tune a pre-trained action classifier to produce uniform distribution between the classes on out-of-distribution samples. At test time, a sample a predicted as being out-of-distribution if its maximum softmax probability is bellow a certain threshold (i.e. the model is sufficiently confused, so to speak ). The contribution of this work is in comparing a few sources of out-of-distribution samples used at training and showing their effect on the models test time performance. In particular, they compare several existing datasets (Kinetics400, Oops by Epstein et al. that focuses unintentional action outcomes, a combinations of a few anomaly detection datasets as well as sci-fi and animation videos collect by the authors). The setup considers a 3D-CNN pre-trained on UCF and out-of-distribution samples come from other datasets, like MiT-v2. The results demonstrate that in this setting the combination of Oops data and either Sci-Fi or animation data does marginally better than the more conventional Kinetics400 data." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Although the paper is readable, the writing quality is low (grammatical mistakes, convoluted writing). Overall, the presentation quality is low (organization of the manuscript, completeness of the captions, notation, clarity, etc.). \n\nThe contribution is overclaimed in the abstract/introduction. The paper only show results on OOD (and even that in an extremely narrow setting) but claim a contribution to \"visual representation learning in the open world\".\n\nThe proposed \"atypical\" video dataset is mostly a combination of existing, public datasets. The authors collect some videos from Sci-Fi movies and animation, but seems like they will not be able to share this data (it is collected from YouTube and Hollywood movies neither of which allow re-distribution by 3rd parties). As such, there is no dataset contribution in the paper.\n\nThe dataset collection protocol is not described in sufficient detail (How the \"super-categories\" were selected? How the individual samples were selected for each \"super-category\"?) Also, the dataset is too tiny to support any representation learning claims (fewer than 6k videos).\n\nOverview of existing video datasets doesn’t include the most recent, large-scale efforts (e.g. WebVid).\n\nLots of important details are missing or aren't clearly described. For example, the notation is incomplete/inconsistent: L_OE is not defined (which is the key objective in the approach), the original loss is denoted inconsistently in the equations and in the text. The notation in Table 3 and Figures 4, 5 is not defined. Gaussian noise dataset is not described in sufficient detail, which dataset is used as an original to add noise to? How exactly the amount of noise to add is determined? For some reason a new outlier dataset (diving48) is introduced inside the experiments section. It is unclear how the outlier samples are introduced during fine-tuning (e.g. 
is there some sample balancing between outlier and in-distribution samples?).\n\nOutlier exposure datasets are either much larger (Kinetics400) than the in-distribution UCF-101 dataset or comparable in scale (proposed Atypical), which is not a realistic scenario. Note that these datasets need to be labeled with action categories, because they cannot include samples from the training distribution. In practice, in a representation learning scenario, one would want to use the vast majority of the labeling effort for in-distribution data.\n\nIt is unclear why the evaluation of the effect of each data source in Table 3 only considers pairs of data sources, and never reports the effect of each individual data source separately. On the same note, to fairly compare individual data sources, their size has to be made uniform first. Otherwise it is impossible to claim that the largest source (e.g. Oops) leads to better results because of its content, not simply because of its larger scale.\n\nThe biggest issue with this work is that the contribution seems to be minimal, if it exists at all. Is it in the observation that more diverse OOD data during training helps to better detect OOD samples at test time? This is hardly surprising/novel. Moreover, the experimental setting is too narrow to make even this unoriginal conclusion. Strictly speaking, this paper shows that using Oops + data which is very different in appearance from standard action recognition datasets (e.g. animation) is (slightly) better than using Kinetics400 when trying to learn OOD detection on UCF. And even this narrow conclusion is not clearly established because the experimental setup is somewhat flawed (see comments above). No recipe for automatically collecting/selecting useful OOD training data is provided, so it is unclear how to generalize this approach to other scenarios." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024what,\ntitle={What can we learn from Harry Potter? An Exploratory Study of Visual Representation Learning from Atypical Videos},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3ZdGSTxKuy},\nnote={under review}\n}" }, "abstract": { "value": "Humans usually show exceptional generalisation and discovery ability in the open world, when being shown uncommonly new concepts. Whereas most existing studies in the literature focus on common typical data from closed sets, and open world novel discovery is under-explored in videos.\nIn this paper, we are interested in asking: \\textit{what if atypical unusual videos are exposed in the learning process?}\nTo this end, we collect a new video dataset consisting of various types of unusual atypical data (e.g. sci-fi, animation, etc.). To study how such atypical data may benefit representation learning in open-world discovery, we feed them into the model training process for representation learning. Taking out-of-distribution (OOD) detection as a task to evaluate the model's novel discovery capability, we found that such a simple learning approach consistently improves performance across a few different settings. Furthermore, we found that increasing the categorical diversity of the atypical samples further boosts OOD detection performance. 
These observations in our extensive experimental evaluations reveal the benefits of atypical videos for visual representation learning in the open world, together with the newly proposed dataset, encouraging further studies in this direction." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Open-world learning", "Out-of-distribution detection", "Video classification" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/40ccfa3c17f3508cbc6eb32e9e670fc2cad61c3d.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "What can we learn from Harry Potter? An Exploratory Study of Visual Representation Learning from Atypical Videos" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3b9SKkRAKw
LeFusion: Controllable Pathology Synthesis via Lesion-Focused Diffusion Models
main
Active
data synthesis;diffusion models;cardiac MRI;lung nodule CT;segmentation
applications to physical sciences (physics, chemistry, biology, etc.)
6;6;6;8
4;5;3;5
3;4;3;3
3;2;2;3
3;4;1;3
6.5
4.25
3.25
2.5
2.75
0.522233
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Please revise Figures 1 and 2 to more clearly illustrate the novelty of your proposed approach. Rather than emphasizing the strengths of the paper or incorporating numerous elements into a single pipeline, focus on presenting a straightforward and cohesive pipeline that highlights the mechanisms unique to your method." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This manuscript is well-motivated, and the experimental results are satisfactory." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This manuscript presents a diffusion model that utilizes forward-diffused backgrounds and reverse-diffused foregrounds as inputs, allowing the model to concentrate on reconstructing lesions specifically. Additionally, a post-processing method is applied to enhance generation quality." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "There are several concerns regarding this manuscript:\n\n* The novelty of the proposed approach is limited. The method does not significantly modify the underlying conditional diffusion process but instead introduces variations solely in the input.\n* Figure 2 lacks clarity, and it would be beneficial to include the lesion-focused loss in this figure for a more comprehensive understanding.\n* The writing lacks organization and is difficult to follow, which may impede readability and comprehension." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. In the tables (e.g. table 1), what do you mean by the significantly adverse/positive effects denoted by red/blue? Could you please clarify this in the text as well via a small note in the table caption(s)?\n2. My suggestion: move image quality assessment quantitative results in the appendix (Table A2) to the main text if you have room. These are important metrics. 
You can shorten the related works to make space, that section doesn't need to be quite so extensive (or some of it could be moved to the supplementary).\n - Also, why didn't you evaluate unpaired perceptual metrics like FID, KID (https://arxiv.org/abs/1801.01401), SWD (https://arxiv.org/abs/1710.10196) etc.? the first two may have limitations for this task given that they use pretrained natural image features, but despite this they are still commonly used metrics for generative medical image models. I would consider adding these for future work, and also explaining why they are not used (particularly for the wider ICLR audience).\n3. For the multiclass lesion case/-J model, did you study how performance/generation quality scales with adding more classes? This point may be a bit moot given how small the changes in performance were measured after adding the channel decomposition module to the base model, but I'm still curious." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "Major\n1. The paper is polished, well-written and well-presented. Topics and concepts are organized and presented in a digestible fashion.\n2. Overall, decent technical novelty. This incorporates many techniques which all come together to result in a strongly-performing methods, some pre-existing (such as combined noised backgrounds with denoised foregrounds), and some seemingly novel (such as histogram-based textural control). Also, despite the many components, the approach still seems relatively watertight because these additions are all pretty lightweight/simple (a good thing). No requirement for an additional network or something of that sort.\n3. Overall, results are strong. Clear improvements over baseline methods is basically all cases, using reasonable metrics. They also study a range of training settings, which is good. Clear improvements over Cond-Diffusion, which would be the naïve approach that many would think of first trying for this task; the limitations of it as discussed in the introduction are clear from the experiments.\n4. They also have fairly extensive ablation studies for their method, which is important given the number of components that they propose using. There are still a few related questions that I have, but they are minor.\n5. In general, the evaluation is fair and appropriate. The datasets are challenging benchmarks, and I think two is sufficient given the wide range of experiments completed on them. There is also a good number of baseline models, especially considering that this task is relatively niche, so the methodological baselines that they compare to seem strong.\n\nMinor\n1. The motivation for this problem is clear: pathological subjects are indeed rare, especially for screening populations. Your survey of the limitations of existing lesion synthesis approaches also supports the motivation; for example, they result in low quality backgrounds, they lack precise control over generated lesions, etc.\n2. The use of a histogram representation to condition the model on may seem too reductive for some applications, but it seems to work well here (makes sense given the clear correspondence between histogram shape/number of peaks and generated lesion morphology shown in Fig. 3), supported by the clear improvement to your method that including the -H module produced." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors introduce a latent diffusion model-based method for inserting lesions into healthy medical images while also providing an accompanying mask. They utilize a number of additions to their model to address limitations of prior work or naïve approaches to this task (both pre-existing and seemingly novel), such as combining forward-diffused backgrounds with reverse-diffused foregrounds, introducing intensity histogram-conditioning to the diffusion model to control lesion texture, as well as techniques for further control of the shape, size etc. of the generated lesion. They evaluate their method for a variety of experimental scenarios on 3D cardiac MRI lesion and CT lung nodule generation, showing that their technique results in noticeable improvements to existing approaches with respect to using their generated data to train downstream task segmentation models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Major\n1. Some limitations of impact/scope: This task is clinically important but still fairly niche in medical image analysis, which itself is fairly niche within general machine learning and computer vision. The method (and task itself) also requires that dataset used needs the required annotations, which many medical datasets may not possess, and can be expensive/time-consuming to acquire. Overall, these limit the impact of the work somewhat, in the context of an ML conference at the level of ICLR, compared to a venue a bit more niche like MICCAI.\n\nMinor\n1. The benefits from using multi-channel decomposition (comparing the \"-J\" to no \"-J\" variants of your model in Table 2) are quite small. Can you provide some analysis or discussion of why this is the case, even if just hypothesizing? (However, I am guessing that the computational requirement to adding this component is practically negligible, so there is not really any harm in including it even if it results in only a very small performance improvement.)\n2. You state in the abstract that synthesizing multi-peak and multi-class lesions is a \"major challenge\" I agree with the multi-peak case given how much your histogram-conditioning improved the generation of such lesions, but based on your channel decomposition module's only very small improvements to performance, I'm unsure if generating multi-class lesions could not already be done well by prior methods. Could you clarify this/point to your results that support this, and/or provide quantitative evidence that multi-class synthesis is challenging for prior approaches?\n\nTo summarize, the paper is methodologically solid, with some technical novelty, and demonstrates clear improvements to prior techniques for lesion generation tasks in medical images via well-designed experiments and baselines. However, the main limitation is just that the task is relatively niche within medical image ML, which makes it more niche within general ML, and so may be less impactful at a venue like ICLR as opposed to a medical imaging-focused venue such as MICCAI or MIDL. Still, these limitations do not take away the good things about the paper (of which there are many), so I vote for a marginal accept." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "For specific questions please refer to the points made in weaknesses." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Lesion generating models are tools with significant potential for mitigating bias in medical vision AI algorithms concerning lesion detection, segmentation and quantification. Advancements in this topic should be highlighted in venues like this. \nThe manuscript is sufficiently well written, all the provided Figures/Tables are insightful and adequately formatted. \nThe choice of a 3D method for this inpainting problem is most adequate for CT and MRI. In these modalities, clinical lesion analysis workflows depend on the visualisation of multiple affected slices and 2D slice-wise inpainting methods would lead to slice-wise discontinuities. \n\nThe proposed method is sufficiently contextualised in the Introduction and Related work sections, where the reseach gap is clearly defined. Beyond that, this gap is empirically demonstrated by experimenting with state-of-the-art approaches (Cond-Diffusion variants and RePaint). \n\nThe proposed methodologies are thoroughly evaluated through comparisons with multiple other approaches, focusing on visual inspection of inpainted lesions (including comparison with real lesions) and their their downstream usability for training segmentation models. The latter evaluation used two different segmentation models, which contributes to the robustness of the findings across different segmentation training strategies. In addition, evaluating the approach on both MRI and CT datasets, ensures that the findings are not only applicable to one imaging domain. \n\nThis paper provides multiple key contributions which not only address the research gap but also deal with modality specific challenges related to lesion texture and shape heterogeneity. The corresponding claims are well supported by the results." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a novel 3D lesion inpainting method, LeFusion, which uses diffusion models to address data scarcity in medical imaging. Its primary aim is to generate synthetic lesions in lung CT and cardiac MRI scans for augmenting training data in lesion segmentation tasks. The approach is validated through both visual quality assessments and data augmentation derived segmentation \n\nperformance improvement. Three key contributions can be summarised below: \nLeFusion Model: The authors identify that existing lesion inpainting methods struggle to preserve anatomically accurate backgrounds alongside the inpainted lesion, remarking that modelling the former is both hard and unnecessary. 
LeFusion is introduced to address this challenge by incorporating two distinct features: (a) Training on a lesion-focused diffusion loss, which only considers the lesion region. (b) Preserving the background at inference time with RePaint [1] by generating the lesion separately, while integrating forward-diffused background contexts into the reverse diffusion process. This design yields realistic lesions and better-preserved backgrounds, and improves data augmentation outcomes in both CT and MRI compared to non-lesion-specific models (Cond-Diffusion), both with and without RePaint-based sampling. \n\nModality-Specific Variants: Two specialized variants are introduced to address modality-specific challenges. LeFusion-H uses histogram-based conditioning to capture diverse lesion textures in CT, successfully solving the texture mode collapse observed for the baseline LeFusion. LeFusion-J models multiple tissue subtypes in MRI via multi-channel decomposition, which enables the joint generation of the different lesion tissue types typically observed in cardiac lesions. Both variants demonstrate superior data augmentation effectiveness in their respective modalities. \n\nDiffMask for Mask Generation: All variants of LeFusion rely on either existing real masks or handcrafted ones as priors for generating lesions in healthy scans. As a more flexible alternative, DiffMask is a diffusion model that generates synthetic lesion masks from basic spatial constraints, defined as a sphere with a user-specified location and size. Using the generated masks for data augmentation leads to the largest improvement in segmentation performance relative to the baseline in both CT and MRI." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While S4, the Introduction and Background sections seem to imply that the proposed lesion-focused loss is a novel contribution proposed for the first time by the authors. This might not necessarily be true considering that there have been other works that employ similar approaches [2, 3]. While few and perhaps not as thoroughly evaluated, mentioning them could further strengthen the contextualisation of the approach. \n\nThe description of the RePaint method in the experimental section implicitly suggests it consists of Cond-Diffusion using the RePaint [1] inference scheme. If that is the case, it should be mentioned explicitly; if not, it should be better described. \nIn the segmentation experiments, it is understood that the mask priors for generating lesions in healthy scans (N’) are either derived from real masks, handcrafted or generated by DiffMask. However, additional information should be provided on how exactly the conditioning histograms in this N’ setting are selected when using LeFusion-H variants. \n\nRegarding DiffMask, the definition and role of the boundary mask are not very clear. From Figure 4, it is presumed that it corresponds to the bounding box defining the volume crop centred on the lesion. However, the statement \"The boundary mask removes areas outside the boundary at each diffusion step\" challenges this concept. Further clarity on this point would be appreciated. Furthermore, it is only implicit that DiffMask takes the CT/MRI volume crop as an input in addition to the conditioning control sphere. Section 3.3 should be updated to enhance clarity on all these aspects.
\n\nAdding supplementary details on how the model training and checkpoint selection were conducted for the RePaint, Cond-Diffusion, and Cond-Diffusion (L) baselines would improve transparency. \n \n[Minor] \nMore detail on the dataset preprocessing would be beneficial for further reproducibility. A mention of the volume resolution is particularly lacking. \n\nThe choice of the specific crop-size could be further supported by previous work, for instance [4]. In addition, while not critical for acceptance, it would be interesting to study its effect on the results, which would perhaps answer the question: \"How much local context is necessary to generate a realistic lesion?\" \n\nWhile the purpose of the inpainted lesions is for downstream model training, further validating them with a radiologist would safeguard against potential biases that the generative model might be introducing into the lesions. \n\nWhile describing Tables 1 and 2, it would be useful to clarify what is considered \"significant\". Since no standard deviations were provided, it is implied that these results were obtained for a single fold, so the concept of significance here is vague. In addition, while S5, the robustness of these findings to the specific data split could still be reinforced by adopting some sort of cross-validation strategy. \nThe authors leave it unclear whether the segmentation model was trained on the volume crops centred on the lesion or on the entire scans. From the use of the Copy-Paste method in the evaluation, the latter is presumed, but it is not explicitly mentioned. \n\nIn the cardiac MRI experiments, the LeFusion baseline of modelling the two lesion tissue types with separate models is mentioned as LeFusion in Table 2 but as LeFusion-S in Figure 5 and in the Appendix. It is suggested that the authors stick to one terminology. \nAs a work mainly focusing on specific diffusion model mechanics for improved lesion inpainting, it makes sense that the evaluation focuses on comparing different diffusion-based methods. That said, it would still be interesting to see how GAN-based approaches like [4, 5] would fare in this comparison. \n \nReferences: \n[1] Lugmayr, Andreas, et al. \"Repaint: Inpainting using denoising diffusion probabilistic models.\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022. \n[2] Hansen, Colin, et al. \"Inpainting Pathology in Lumbar Spine MRI with Latent Diffusion.\" arXiv preprint arXiv:2406.02477 (2024). \n[3] Rouzrokh, Pouria, et al. \"Multitask brain tumor inpainting with diffusion models: A methodological report.\" arXiv preprint arXiv:2210.12113 (2022). \n[4] Yang, Jie, et al. \"Class-aware adversarial lung nodule synthesis in CT images.\" 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019). IEEE, 2019. \n[5] Wu, Linshan, et al. \"FreeTumor: Advance Tumor Segmentation via Large-Scale Tumor Synthesis.\" arXiv preprint arXiv:2406.01264 (2024)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. The reverse diffusion sampling process is not clearly defined; it appears to rely solely on the transformation in Equation (1), without detailing the sampling process or providing theoretical justification for omitting it.\n2. Although the background from forward diffusion is used as the background in the reverse sampling process, and the loss constraint is applied only to the lesion area, how is continuity and smoothness ensured in the intersecting regions between the lesion and background?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The overall paper structure is clear and well-expressed.\n2. A novel diffusion model is redesigned from the perspective of high-fidelity background preservation, with two texture generation control techniques developed to address multimodal and multiclass issues.\n3. The comparative methods are recent benchmarks from the past two years, making the results highly convincing." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on generating lesion-containing images from healthy images to address challenges in downstream segmentation tasks, such as real-world data scarcity and long-tail distribution issues. Previous research on medical image synthesis has primarily concentrated on lesion generation design, often overlooking high-fidelity background preservation. The authors propose a lesion-focused diffusion model, LeFusion, which maintains high-fidelity background by integrating the background from forward diffusion into the reverse diffusion process, thus simplifying the learning process and improving output control. Additionally, two effective strategies are introduced: histogram-based texture control and multi-channel decomposition to address the two main challenges in lesion texture synthesis: 1) multimodal and 2) multiclass lesions. The paper is well-written, with comprehensive experimental comparisons." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.There is a lack of detail on implementation specifics (such as the sampling process) and theoretical support for the method.\n2. Analysis and discussion on the continuity at the fusion boundaries between lesion and background are missing, as well as the impact on downstream tasks." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose LeFusion, a lesion-focused diffusion model that synthesizes diverse lesion image-mask pairs from lesion-free images, enabling controllable multi-peak and multi-class lesion generation, significantly improving segmentation models." 
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024lefusion,\ntitle={LeFusion: Controllable Pathology Synthesis via Lesion-Focused Diffusion Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3b9SKkRAKw},\nnote={under review}\n}" }, "abstract": { "value": "Patient data from real-world clinical practice often suffers from data scarcity and long-tail imbalances, leading to biased outcomes or algorithmic unfairness. This study addresses these challenges by generating lesion-containing image-segmentation pairs from lesion-free images. Previous efforts in medical imaging synthesis have struggled with separating lesion information from background, resulting in low-quality backgrounds and limited control over the synthetic output. Inspired by diffusion-based image inpainting, we propose LeFusion, a lesion-focused diffusion model. By redesigning the diffusion learning objectives to focus on lesion areas, we simplify the learning process and improve control over the output while preserving high-fidelity backgrounds by integrating forward-diffused background contexts into the reverse diffusion process. Additionally, we tackle two major challenges in lesion texture synthesis: 1) multi-peak and 2) multi-class lesions. We introduce two effective strategies: histogram-based texture control and multi-channel decomposition, enabling the controlled generation of high-quality lesions in difficult scenarios. Furthermore, we incorporate lesion mask diffusion, allowing control over lesion size, location, and boundary, thus increasing lesion diversity. Validated on 3D cardiac lesion MRI and lung nodule CT datasets, LeFusion-generated data significantly improves the performance of state-of-the-art segmentation models, including nnUNet and SwinUNETR." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "data synthesis", "diffusion models", "cardiac MRI", "lung nodule CT", "segmentation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/5fec9c6ea8d35de29ed952210d0c7908958f8395.pdf" }, "presentation": null, "primary_area": { "value": "applications to physical sciences (physics, chemistry, biology, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "LeFusion: Controllable Pathology Synthesis via Lesion-Focused Diffusion Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3baOKeI2EU
UniCoTT: A Unified Framework for Structural Chain-of-Thought Distillation
main
Active
Chain-of-Thought; Structural Thought; Distillation; Unified Framework
applications to computer vision, audio, language, and other modalities
5;6;6;8
4;2;3;4
3;3;3;3
2;3;2;3
2;3;2;3
6.25
3.25
3
2.5
2.5
0.207514
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Why focus on using an encoder as the student model?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The writing is clear and easy to follow, with a well-defined motivation for the research.\n- The distillation framework proposed is innovative, especially in using a graph structure to represent different chains of thought and introducing corresponding training methods.\n- The approach is extensively tested on multiple benchmark datasets, demonstrating strong empirical performance." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on distilling the reasoning capability, specifically chain-of-thought reasoning, from large language models into smaller models. Specifically, the paper uses prompts to guide a larger teacher model to generate multiple explanations, or \"thoughts,\" for given questions and answers. These explanations are represented in a graph structure. Then, the small student model is trained using traditional cross-entropy loss along with a novel structural consistency loss and supervised contrastive loss proposed by the authors." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The framework mainly focuses on distilling explanation and reasoning abilities into base models like BERT. A concern is the limited application scope of such encoder-based models. To further validate the effectiveness of the proposed distillation framework for reasoning abilities, it would be interesting to distill the chain-of-thought reasoning from larger models into smaller decoder-based models and test them on complex reasoning tasks." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. How does the performance of UniCoTT scale with the size and complexity of the knowledge to be transferred? Are there diminishing returns as the complexity increases?\n2. What are the limitations of the current implementation of UniCoTT, and how might these be addressed in future work?" 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper introduces a unified framework that handles diverse structural CoTs, which is a significant advancement over existing methods that focus solely on chain structures.\n2. The authors provide extensive experimental evidence to support the effectiveness of UniCoTT, showing improvements across different NLP tasks and datasets.\n3. The consideration of structured reasoning pathways (tree and graph) in addition to chains is a strength, as it better captures the complexity of human reasoning processes." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a novel framework for transferring the reasoning capabilities of large language models (LLMs) to small language models (SLMs) through a structured chain-of-thought (CoT) distillation approach. The authors propose UniCoTT, which considers diverse structural CoTs (chain, tree, and graph) and employs two core strategies: iterative construction for structured CoTs and a structural constraint strategy. The framework aims to address the challenges of ensuring the rationality of generated explanations and ignoring diverse structures of CoT during knowledge transfer. The experimental results demonstrate significant performance improvements of SLMs on multiple NLP tasks across various datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper could benefit from a discussion on the computational complexity of UniCoTT and its scalability, especially when dealing with very large datasets or more complex reasoning tasks.\n2. The construction of UniCoT relies on APIs of LLMs, which may not be accessible or feasible in all situations. The paper could address potential alternatives or mitigation strategies. Besides, SLMs usually refer to small language models, e.g., 2B and 3B. The authors mainly conducted experiments on BERT and RoBERTa, which were not convincing enough.\n3. While the results are promising, the paper primarily focuses on question-answering and NLU tasks. It would be beneficial to see how UniCoTT generalizes to other types of tasks." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. The authors list \"hallucinations\" as one of the major drawbacks of previous works, and motivate the design of UniCoTT in ``introduction'' section. I am wondering how the designed UniCoTT framework helps to alleviate this issue.\n2. In lines 385-386, why $\\alpha$ and $\\beta$ is set to 0.5 and 0.2 respectively? Is it an intuitive trial or a result of a grid search?\n3. It would be interesting to test the annotation efficiency of CoT with the teacher model. 
An empirical conclusion on how many annotations are enough for strong distillation performance would be insightful." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The proposed method is technically sound and intuitively makes sense. It is very interesting to transfer the knowledge from structured CoT texts into smaller models that can leverage rationale knowledge in a unified manner.\n2. Experimental results on several benchmarks show some improvement over baselines, and the authors conduct extensive ablation studies and analyses on various design choices.\n3. The paper itself is generally well-written." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes UniCoTT, a unified distillation framework for transferring diverse CoT reasoning structures to smaller language models such as BERT and RoBERTa. Firstly, UniCoT is proposed as a unified bridge of various CoT structures, which is constructed by iteratively prompting LLMs to produce explanations with correct answers. After that, a node-level supervised contrastive loss and a structural consistency loss are designed as part of the training objective. Experiments on multiple reasoning datasets verify the effectiveness of UniCoTT, which surpasses the baseline methods by a large margin." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The generalizability of the proposed framework is yet to be known. It is only verified on multiple-choice datasets. Whether it could be extended to other task settings like text generation remains a concern.\n2. The process of iteratively constructing UniCoT is hard to understand from the main body of the current version. I would suggest the authors move some content from the appendix to the main body. Meanwhile, it would be helpful if the authors could provide some overall statistics on the constructed UniCoT, for example, the average number of nodes and edges of the structure." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "No" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Extends CoT reasoning with diverse structures, which broadens the reasoning capabilities of SLMs.\n- Implements structural consistency and contrastive learning, effectively aligning SLMs with complex CoT reasoning paths.\n- Demonstrates superior performance on multiple tasks, showing effectiveness and generality in knowledge transfer."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces UniCoTT, a teacher-student framework aimed at transferring complex reasoning abilities from large language models (LLMs) to smaller language models (SLMs). UniCoTT extends traditional chain-of-thought (CoT) reasoning by leveraging diverse structured reasoning paths, such as chains, trees, and graphs, within a unified distillation process. This approach involves iterative CoT construction, node-level supervised contrastive learning, and structural consistency learning to reinforce reasoning capabilities in SLMs. Experimental results on factual reasoning, multiple-choice QA, and natural language understanding tasks demonstrate that UniCoTT outperforms existing methods, enhancing SLM performance across several benchmarks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "UniCoTT’s increased complexity and computational requirements could make real-world deployment challenging. To be fair, as distillation strategy proposed in this paper uses three types of reasoning and more compute to create dense supervision. The baselines like CoT may also uses more compute like more chains in self-consistency to increase the quality of distillation data." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024unicott,\ntitle={UniCo{TT}: A Unified Framework for Structural Chain-of-Thought Distillation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3baOKeI2EU},\nnote={under review}\n}" }, "abstract": { "value": "Chains of thought (CoTs) have achieved success in enhancing the reasoning capabilities of large language models (LLMs), while their effectiveness is predominantly observed in LLMs. \n Existing solutions methods adopt distillation to inject chain-of-thought capabilities into small models (SLMs).\n However, they: \n (1) can not guarantee the rationality of the generated explanation due to hallucinations; \n (2) ignore diverse structures of CoT during knowledge transfer.\n In this paper, we propose a unified CoT distillation framework termed UniCoTT for considering diverse structural CoTs (\\emph{i.e.}, chain, tree, and graph).\n UniCoTT contains two core strategies: iterative construction for structured CoTs and the structural constraint strategy.\n Specifically, UniCoTT prompts LLMs to iteratively produce accurate explanations with answers and unifies structured explanations as UniCoT which is seen as a bridge for knowledge transfer.\n Furthermore, UniCoTT utilizes the proposed unified supervised learning and structural consistency learning strategies to transfer knowledge of structured CoT to SLMs. \n Experimental results show that UniCoTT can significantly improve the performance of SLMs on multiple datasets across different NLP tasks. **Our code is available in our supplementary materials.**" }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Chain-of-Thought; Structural Thought; Distillation; Unified Framework" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/90e43aca40591b1c70a80cf49361f0c89b0c9236.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/4bf4f65d981ae8e34236b92c7751e256eb21e476.zip" }, "title": { "value": "UniCoTT: A Unified Framework for Structural Chain-of-Thought Distillation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3bcN6xlO6f
Video Action Differencing
main
Active
Video;Actions;Differencing;Zero-shot;benchmark
datasets and benchmarks
5;5;5;6
4;5;4;4
3;2;2;3
2;4;3;3
3;1;2;4
5.25
4.25
2.5
3
2.5
-0.333333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "see above" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The proposed task has not been explored, which has key applications for some scenarios in real life.\n2. The construction process for the dataset is technically sound with different splits.\n3. The visulizations are clear and interesting." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces Video Action Differencing, a novel task of identifying subtle differences between videos of the same action. It also introduces a new benchmark sourced from mutliple video datasets with new annatations. A new method is proposed for this new task with state-of-the-art performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Weakness and questions:\n1. Do the authors consider factors like fps for videos, which may impact the restults of answering questions like \"the speed of the arms is faster\" for distinguishing videos A and B.\n2. For the open-set benchmark, have the authors analyzed the reasons for why QWen2-VL performs so worse?\n3. Have the authors visualized the selected frames by the frame localizer compared to the ground-truth frames? What's about the effects of frame localizer compared to the ground-truth frames?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please refer to weakness for more details." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The method is shown to outperform baseline large multimodal models by systematically isolating action differences through a three-stage approach, excelling in both closed and open settings.\n- The introduction of benchmark, with extensive annotations across varied domains (e.g., fitness, surgery, sports), provides a unique and structured dataset for fine-grained video action comparison.\n- Evaluations and ablation studies demonstrate the robustness and effectiveness of the method, especially in tasks that require precise frame alignment and action differentiation.\n- The proposed task and methods address real-world challenges in skill-based learning environments." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors introduce a method and dataset designed to compare subtle action differences across video pairs. Their method, VidDiff, uses a three-stage process to generate, localize, and verify these differences using multimodal models.\n\nMain Contributions:\n\n- A Dataset includes 557 video pairs across domains like sports, fitness, and surgery, annotated with over 4,700 fine-grained action differences. The dataset is designed to help models learn and evaluate nuanced action comparisons.\n- A Framework uses a three-step pipeline to identify differences: (1) generating potential differences with a language model, (2) aligning frames between videos using CLIP and Viterbi algorithms, and (3) verifying differences with a vision-language model.\n- A method is compared with leading multimodal models (e.g., GPT-4o and Gemini-1.5 Pro), showing improved performance in closed-set settings. This comparison also highlights challenges in current models for frame alignment and action detail recognition." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The performance of the leading multimodal models on the dataset is not clearly demonstrated, and examples comparing success and failure cases across models would enhance understanding of their effectiveness.\n- The Difference Proposer stage’s reliance on large language models (LLMs) may introduce biases or inaccuracies, especially when generating action differences for complex or nuanced tasks. Providing more details on the generated proposer queries and their corresponding ground truth labels would enhance clarity.\n- Although the multi-stage approach is well-structured, it presents a risk of error propagation. Inaccuracies in early stages could impact the final outputs, potentially reducing overall reliability, particularly in the open-set task, where the Difference Proposer’s effectiveness is not fully evaluated.\n- While the paper introduces a detailed taxonomy for annotations, the reasonableness of some annotations remains unclear. For example, the “surgery_0” annotation includes the description \"the movements in video A are more efficient than in video B,\" which lacks a concrete definition and could be interpreted inconsistently even by human evaluators. Scaling this annotation approach to larger datasets or adapting it to new domains could also present significant challenges.\n- Minor issue: The table mentioned as Table 7 in Section 5.3 is missing." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. Does the sampling of pairs of videos mean that these might not contain the same action (see above)? Or the actions are sampled first to ensure a wide range of comparison difficulty over actions before video pairs are sampled within action?\n2. Were the annotators skilled/experts/knowledgeable in the actions that they were annotating? Or was this found to not be that important given the annotation task? Additionally, how many total annotators were used and were they renumerated for their time?\n3. Has the potential bias of the video pairs in the closed task been checked to ensure that naive performance should be 50% instead of video A (or B) occurring as the answer more than 50% of the time. Additionally, I would be interested to know if candidates which could be categorised as C (for insignificant differences) can be understood by the model as this would also increase the naive difficulty to 33% before taking into account a non-uniform distribution.\n4. The evaluation protocol for the open-set task seems like it could include errors/inconsistencies depending on the LLM output. Has there been any investigation into this and how much it differs per run and how much it aligns with a human? Currently, the prompts are also given in the appendix with little to no discussion as to why these prompts were chosen, if they were developed over multiple iterations to find the best performing prompt, etc.\n5. Did the easy/medium/hard classifications align with experts' opinions for each of the actions? It would be good to know the types of actions that are classed as easy/medium/hard as these are not present within the paper as far as I could tell. It's not clear why an LLM was chosen to do this task.\n6. Could more qualitative results and statistics be provided about the dataset? For example, there is very little in the paper regarding the retrieval task of localising the differences: How much of the video does the method need to localise? Are there any temporal biases regarding the timestamps from the videos? Additionally, under the closed set task, more statistics over the number of As, Bs, and Cs that have been annotated and included for each action would be interesting to see. Other statistics that feel like they are missing are the average length of each video (potentially broken down per category) as well as the total number of hours within the dataset.\n7. As a thought, has an experiment where the same video is duplicated and provided into the methods, would the output predictions (esp. for the closed set task) give a 50% response rate? Ideally, this is where a method could predict the difference is negligible also." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* The new task of Video Action Differencing is an interesting new task for video understanding, forcing models to recognise and understand fine-grained differences between two very similar videos.\n* The collected dataset combines four datasets with 5 different categories of video, providing a varied test bed for this new task.\n* The proposed method performs well on the dataset, outperforming off the shelf LMMs on the task yet still showcase that there is a lot still to work on in this area for future work." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, a new task called video action differencing is proposed which aims for models to be able to understand fine-grained differences between multiple videos of people performing the same action. A new benchmark dataset is collected, named VidDiffBench, which includes 5 categories from 4 different existing datasets. Annotations are collected from pairs of videos with statements given per pair of video based on the action (for example video A includes someone jumping higher than Video B for a lay-up shot). There are two main evaluation protocols for this task, a closed set setting, in which the model must predict A or B for each possible description, and a closed set setting in which the method must generate the description. A new method which combines stages named VidDiff is proposed which outperforms standard LMMs on the dataset." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "# Weaknesses\n\n* There are some missing references for skill determination within the related work [a, b, c, d] as another example of fine-grained differences between videos containing the same action.\n* Line 196: It is mentioned here in the text that *\"Video pairs are randomly sampled within each dataset to ensure a wide range of comparison difficulty, from simple actions to more advanced tasks requiring fine-grained understanding\"* This implies that videos of differing actions are compared against one another. \n* Section 3.3.2: There are some missing information about the annotators, regarding skill level, total number, renumeration etc.\n* For the closed set, a binary classification setup was used as all candidate difference statements which is mentioned to be unbiased on Line 298. However, has this been checked? If videos are not randomly swapped at inference/training time there could have been a bias towards one video or another.\n* The open set evaluation seems like it could be prone to some errors/inconsistencies depending on the LLM chosen and how much it could hallucinate/not understand the task and doesn't represent a potentially sound evaluation protocol.\n* It is not clear within the paper as to why an LLM was used to choose the easy/medium/hard splits for each of the actions.\n* This paper did not feel like an easy read, whilst the grammar/sentence clarity was good. There was a lot of information that is split across the main paper and the appendix which necessitates jumping between them. The structure of the paper could also be improved, the method details occur within the experiments yet are given as a main contribution within the introduction with only a small amount of space given to explain the model. 
Another major factor for this is that details of the dataset are given before the task is formally defined, which, given this is a new task, makes it harder to read than it should be.\n\n# Additional Comments\nLine 158 is referring to the wrong table; this should be Table 1.\nLine 1040 (in supp.) vary -> very\nSection D.1 in the appendix is empty; maybe D.2 is meant to be a subheading of D.1?\nFor results tables, it would be good to include a random performance row.\n\n# References\n[a] Doughty, Hazel, Dima Damen, and Walterio Mayol-Cuevas. \"Who's better? who's best? pairwise deep ranking for skill determination.\" Proceedings of the IEEE conference on computer vision and pattern recognition. 2018.\n\n[b] Doughty, Hazel, Walterio Mayol-Cuevas, and Dima Damen. \"The pros and cons: Rank-aware temporal attention for skill determination in long videos.\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2019.\n\n[c] Pan, Jia-Hui, Jibin Gao, and Wei-Shi Zheng. \"Adaptive action assessment.\" IEEE Transactions on Pattern Analysis and Machine Intelligence 44.12 (2021): 8779-8795.\n\n[d] Zhang, Shao-Jie, et al. \"Adaptive stage-aware assessment skill transfer for skill determination.\" IEEE Transactions on Multimedia (2023)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weaknesses, plus the following:\n\n- Which LLMs/VLMs are used for the *Difference Proposer* and *Action Differencer*?\n- How does the benchmark handle cases of inverse correlation? For example, would *lower squat in video A* be equivalent to *higher squat in video B*?\n- Since the videos are not curated, factors such as different camera angles, varying FPS, or differences in the actor's height could introduce biases in the annotations and results. How do the authors address these potential biases?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "### Originality\n- **A novel task**: This paper introduces the new task of video action differencing with natural language. While related tasks, such as difference captioning, have been explored to provide a coarse comparison between videos, no prior work has tackled video action differencing in the same way—focusing on fine-grained differences described in natural language.\n- **A challenging benchmark**: The proposed benchmark, VidDiffBench, is comprehensive, covering five categories of instructional videos.
It has proven to be highly challenging, even for top-performing closed-source vision-language models (VLMs).\n- **An agent-based system**: The paper presents an agent-based system that decomposes the task, achieving better performance than existing VLMs.\n### Clarity\nThe flow of ideas is straightforward, making the paper easy to follow and understand.\n\n### Significance\nThe paper convincingly demonstrates the importance of video action differencing, and the introduction of the new benchmark is likely to inspire further research in this area." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces the first large-scale video action differencing dataset, presenting a novel task of identifying differences between videos depicting the same action. The authors compile over 500 video pairs from existing datasets across five categories: Fitness, Ball Sports, Diving, Music, and Surgery. These videos are then assigned to annotators along with 147 distinct descriptions. Annotators must indicate which video (A or B) most closely aligns with each description. For example, given two videos of different actors performing a squat, a description might read \"deeper squat,\" and the annotator would select A or B based on which video demonstrates the deeper squat. To ensure dataset quality, 25% of the initial annotations undergo re-annotation, revealing a very low discrepancy rate. The dataset also includes action localization (pinpointing where the action occurs in the video) and specific key points for each action (e.g., when knees start to bend).\n\nThe authors also develop an agentic model called VidDiff to address the action differencing challenge. VidDiff employs several Large Language Models (LLMs) and Vision Language Models (VLMs) as agents to solve specific aspects of the problem: proposing potential differences based on the action description, localizing frames where such actions might occur, and finally specifying which video (A or B) corresponds to the observed difference. VidDiff outperforms other zero-shot VLMs in this task.\n\nLastly, the authors provide ablation experiments that highlight the challenges presented by their new benchmark." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "### Unproven claims\n- In the introduction, the authors claim they will address the challenges of *precise temporal alignment and the need for fine-grained understanding of action dynamics*. However, it remains unclear how they specifically solve the issue of temporal alignment. Could you elaborate on how you solve this issue or point us to the location where it is addressed?\n\n### Benchmark and results\n- Similar datasets are presented in the related work section; however, since this work is primarily a benchmark paper, more comparisons with existing benchmarks would make the differences clearer (e.g., similar to Table 1 but with other datasets in the first column). Consider adding what is unique about each dataset and how the current dataset differs.\n- As a benchmark paper, we would expect more results from other open-source VLMs (especially those addressing video data such as LLaVA-video) to better understand their limitations and make it easier for other researchers to work with this benchmark. \n\n### Clarity\n- 557 or 656 video pairs? In the abstract, the authors state that the dataset contains *557 video pairs...
4,719 fine-grained action differences* (line 013-014), but on line 260, they mention *656 video pairs, 5,580 annotated differences*. Clarification needed on which is correct. \n- Figure 1: The distinction between the first and second row is unclear, yet the caption claims these represent two different challenges. These two challenges are not discussed elsewhere in the paper and don't seem to be related to the dataset splits. Please clarify this." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "A new task and benchmark for comparing how an action is performed between two videos, with a zero-shot method" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024video,\ntitle={Video Action Differencing},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3bcN6xlO6f},\nnote={under review}\n}" }, "abstract": { "value": "How do two individuals differ when performing the same action? In this work, we introduce Video Action Differencing, the novel task of identifying subtle differences between videos of the same action, which has numerous applications, such as coaching and skill acquisition. To enable development on this new task, we first create VidDiffBench, a benchmark dataset containing 557 video pairs, with human annotations of 4,719 fine-grained action differences and 2,075 timestamps indicating where these differences occur. Our experiments demonstrate that VidDiffBench poses a significant challenge for state-of-the-art large multimodal models (LMMs), such as GPT-4o, Gemini 1.5 Pro, and Qwen2-VL. By analyzing the failure cases of LMMs on VidDiffBench, we highlight two key challenges for this task: frame-by-frame alignment and fine-grained frame comparison. To overcome these, we propose VidDiff, an agent-based system that breaks the task into three stages: action difference proposal, keyframe localization, and difference verification, each stage utilizing specialized foundation models. The VidDiff method outperforms these baseline LMMs. We release both the dataset and code to encourage and support future research in this domain." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Video", "Actions", "Differencing", "Zero-shot", "benchmark" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/9ef4acd4c5de63ad17d8b82c81ee3b0c46cb5ffa.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. 
To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/d4363054841c9ff375be851451deab36c7f3a3b9.pdf" }, "title": { "value": "Video Action Differencing" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3c4zQpIFNK
LIME: LESS IS MORE FOR MLLM EVALUATION
main
Active
Multimodal Language Models;Multimodal Benchmark
datasets and benchmarks
5;5;6;8
5;4;3;4
3;3;3;3
3;2;3;3
3;3;3;3
6
4
3
2.75
3
-0.288675
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "Yes, Legal compliance (e.g., GDPR, copyright, terms of use)" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. How sensitive is the filtering pipeline to the choice of judge models? Would using different combinations of models as judges result in significantly different benchmark compositions?\n2. How do you ensure that the filtering process doesn't inadvertently favor certain types of model architectures or training approaches?\n3. Have you explored whether the reduced dataset size affects the statistical significance of model comparisons? What is the minimum number of samples needed for reliable evaluation?\n4. (Minor) If the benchmark is accepted, what will the authors do to let the community buy the idea using your combined filtered benchmark instead of the existing ones? While I believe the benchmark is useful. One concern from my side is that people may still stick to the individual raw datasets." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* Originality: \n * Novel approach to benchmark curation that focuses on quality over quantity. \n * Creative use of MLLMs themselves as judges for data filtering. \n * Innovative three-stage filtering pipeline (model judgment, semi-automated screening, leakage elimination)\n* Clarity:\n * Well-structured presentation of the methodology \n * Clear visualization of data statistics and filtering results\n* Quality:\n * Comprehensive empirical validation across multiple models and benchmarks\n * Thorough analysis of the correlation between different subtasks" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces LIME (Less Is More for MLLM Evaluation), a refined benchmark for evaluating Multimodal Large Language Models (MLLMs). The authors propose a semi-automated pipeline to curate a more efficient and discriminative evaluation dataset by filtering out uninformative samples and eliminating answer leakage. The resulting benchmark reduces the number of evaluation samples by 76% and evaluation time by 77% while maintaining or improving the ability to distinguish between different models' capabilities. Key findings include the inadequacy of traditional automatic metrics for captioning tasks and the importance of excluding caption task scores for more accurate overall performance assessment." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The filtering pipeline heavily relies on existing MLLMs as judges, which could potentially introduce biases from these models into the benchmark. 
While the authors attempt to mitigate this by using multiple models, a more thorough analysis of potential inherited biases would strengthen the work.\n- The paper does not fully explore whether the reduced dataset size might affect the statistical significance of evaluation results. While efficiency gains are clear, more discussion of the tradeoffs between dataset size and evaluation reliability would be valuable\n- The choice of tasks and task weightings in the final benchmark appears somewhat arbitrary. A more systematic approach to determining which tasks are most important for evaluating MLLMs would strengthen the methodology." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weakness." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The problem is important and interesting to the community. Evaluation is an important part for multimodal LLM. This work dives deep into existing benchmarks and conducts comprehensive analysis to study the specific questions in those benchmarks. The motivation of Figure 1 and 2 is clear and important." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Existing MLLM benchmarks often include overly simple or uninformative samples, making it difficult to effectively distinguish the performance of different MLLMs. This work proposes LIME , a refined and efficient benchmark curated using a semi-automated pipeline.\nThis pipeline filters out uninformative samples and eliminates answer leakage by focusing on tasks that require image-based understanding. The experiments show that LIME reduces the number of samples by 76% and evaluation time by 77%, while it seems promising to be more effective for distinguishing different models’ abilities." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. My biggest concern is that the approach only filter the samples from the existing benchmarks, do we need to consider adding other metrics/domains to evaluate MLLMs? \n2. Another interesting thing is that sometimes MLLM may not \"read\" image but directly answer the questions based on the knowledge from LLM, do we need to consider adding this into the benchmark?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. In Line 036, the author mentions 'How many chairs in the image'. Does it mean all existing MLLMs' counting capabilities are not satisfactory?\n2. In Line 156, 'The model has encountered a specific question during training'. Does the term 'model' here refer to LLM or MLLMs?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper uncovers the problem of existing benchmarks and the proposed filter method is reasonable and meaningful. \n2. The filter benchmark provides a more rigorous evaluation of the existing MLLMs and will have practical significance for future MLLM evaluations.\n3. The experiment results are comprehensive and insightful." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes the LIME, a refined and efficient benchmark for MLLM evaluation. The paper first shows that existing benchmarks contain a large proportion of easy or noise samples that cannot reflect the actual capabilities of MLLM. Then, the paper proposes a three-stage pipeline to filter the existing 10 benchmarks across 6 types. The easy samples, wrong-labeled samples, and answer-leakage samples are removed during this process. The refined benchmark can provide a more rigorous evaluation of the existing MLLMs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Do not compare with other general MLLM benchmarks like MMMU or MMBench. I would also like to see whether the easy samples or answer-leakage samples exist in these benchmarks." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The removal of easy samples and meaningless or erroneous data from the dataset is crucial for more efficient and reasonable evaluation of MLLMs. The authors utilize GPT-4V and human annotators to fillter out those illogical and meaningless questions, which seems to have been overlooked in previous benchmarks.\n2. The authors evaluate over 30 baseline models and provide an analysis of MLLM performance based on the evaluation results, which clearly represents a significant amount of work.\n3. 
The authors also construct a similarity search system for investigating the gap between LIME and real-world users’ queries, which shows that the current benchmark does not fully cover the instruction requirements of real-world scenarios." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a refined and efficient MLLM benchmark called LIME, which enhances the quality of existing benchmarks through semi-automatic refinement. LIME consists of 9,400 evaluation samples across six types of tasks and ten different benchmark datasets. The authors evaluated over 30 models and provided some analyses based on the results." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The proposed benchmark, LME, integrates existing benchmarks and adopts their evaluation metrics, which have been previously criticized in earlier works (specifically designed for evaluating MLLMs) [1, 2] as being unsuitable for assessing open-form outputs of MLLMs. For instance, the authors mention, “for tasks such as AI2D, ScienceQA, OCRBench, and POPE, we calculate the accuracy of the extracted responses.” In these benchmarks, if the correct answer is \"bike\" but the model outputs \"bicycle,\" it is considered incorrect, which is an unreasonable approach. The authors should employ more appropriate evaluation metrics, such as multiple-choice questions, true/false questions, or scoring by GPT.\n2. To eliminate answer leakage, such as when a model has seen the questions during training, the authors conduct a text-only check using pure text LLMs. Based on the responses from these LLMs, they remove samples that can be directly answered without using the image. However, this approach is unreasonable because these multimodal questions would only appear in the training of MLLMs. Therefore, the authors should use MLLMs to filter out such questions instead of relying on LLMs.\n\n[1] SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension \n[2] MMBench: Is Your Multi-modal Model an All-around Player?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024lime,\ntitle={{LIME}: {LESS} {IS} {MORE} {FOR} {MLLM} {EVALUATION}},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3c4zQpIFNK},\nnote={under review}\n}" }, "abstract": { "value": "Multimodal Large Language Models (MLLMs) are measured on numerous benchmarks like image captioning, visual question answer, and reasoning. However, these benchmarks often include overly simple or uninformative samples, making it difficult to effectively distinguish the performance of different MLLMs. Additionally, evaluating models across many benchmarks creates a significant computational burden. To address these issues, we propose LIME (Less Is More for MLLM Evaluation), a refined and efficient benchmark curated using a semi-automated pipeline. This pipeline filters out uninformative samples and eliminates answer leakage by focusing on tasks that require image-based understanding. Our experiments show that LIME reduces the number of samples by 76% and evaluation time by 77%, while it can more effectively distinguish different models' abilities. 
Notably, we find that traditional automatic metrics like CIDEr are insufficient for evaluating MLLMs’ captioning performance, and excluding the caption task score yields a more accurate reflection of overall model performance. All code and data are available at https://anonymous.4open.science/r/LIME-49CD." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Multimodal Language Models", "Multimodal Benchmark" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/dfffbcddc9cec825d53a332c1246c165aa2b8ada.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "LIME: LESS IS MORE FOR MLLM EVALUATION" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
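The LIME record above describes a semi-automated filtering pipeline: multimodal judge models flag overly easy samples, and a text-only LLM check catches answer leakage. The sketch below is an illustrative reconstruction of that idea, not the authors' released code; the function names, the 0.8 "too easy" threshold, and the toy inputs are all assumptions made for the example.

```python
# Illustrative sketch (not the LIME authors' code): keep only benchmark samples that
# are neither trivially easy for the judge models nor answerable without the image.

def filter_samples(samples, judge_results, text_only_results, max_correct_judges=0.8):
    """Return the ids of samples worth keeping.

    samples:            list of sample ids
    judge_results:      dict sample_id -> list of bools (one per multimodal judge)
    text_only_results:  dict sample_id -> bool (True if a text-only LLM already answers
                        correctly, i.e. likely answer leakage)
    """
    kept = []
    for sid in samples:
        verdicts = judge_results[sid]
        frac_correct = sum(verdicts) / len(verdicts)
        too_easy = frac_correct >= max_correct_judges   # nearly all judges solve it
        leaked = text_only_results.get(sid, False)      # solvable without the image
        if not too_easy and not leaked:
            kept.append(sid)
    return kept


if __name__ == "__main__":
    samples = ["q1", "q2", "q3"]
    judge_results = {"q1": [True, True, True], "q2": [True, False, False], "q3": [False, False, True]}
    text_only = {"q3": True}
    print(filter_samples(samples, judge_results, text_only))  # -> ['q2']
```

Keeping only samples that most judges miss concentrates the benchmark on discriminative items, which is the effect the reviewers above credit for LIME's efficiency gains.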
3cgMU3TyyE
Broaden your SCOPE! Efficient Conversation Planning for LLMs with Semantic Space
main
Active
Conversation Planning;Tree search for LLM
foundation or frontier models, including LLMs
6;8;8
4;4;4
2;2;3
3;3;4
3;3;2
7.333333
4
2.333333
3.333333
2.666667
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "How does SCOPE handle the potential bias introduced by the semantic embedding model?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The main innovations of this paper:\nIntroduces the concept of representing conversations in a dense semantic space, which captures the semantics of natural language conversations effectively. This representation allows for the modeling of stochastic transitions within a conversation and their associated rewards. Compare with the language or token space, this method helps to achieve a significant improvement in planning speed and improves the diversity of LLM samples." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces SCOPE, a novel approach for efficient conversation planning with LLMs. It leverages dense semantic representations to model stochastic transitions and rewards within conversations, enabling faster planning without additional LLM queries. SCOPE achieves higher cumulative rewards compared to conventional simulation-based planning algorithms, demonstrating its effectiveness in real-time conversations." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper relies on a specific dataset (1msys/1msys-chat-1m) for training the transition models. It would be beneficial to demonstrate the generalizability of SCOPE by testing it on additional datasets or in different conversational contexts." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "As mentioned above if the authors can address my concern regarding evaluation and the choice of reward functions.\n\nAnother question I have is do you think the transition model you trained on those datasets will generalize to other unseen domains? Has this been looked at?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1) The goal of this paper is well motivated. 
Working towards a more efficient conversation planning method can help with customer experience since latency will decrease and it seems the proposed method is novel. I think this will further encourage future work in this area.\n\n2) The paper is well-written and easy to follow. I appreciate diagrams such as Figure 8 which helped visualize their overall Algorithm. Additionally the explanation of their method is also clear and easy to follow. In addition to giving good details on their experimentation the authors also released their code which will make it useful for the community to reproduce and build off of." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a method called SCOPE (Semantic space COnversation Planning with improved Efficiency) which focuses on making conversation planning in LLMs more efficient. There is a need to look ahead at every point in the conversation to see if choosing a particular response will lead to a better conversation in the long run; however, the authors mention that current methods that use vanilla MCTS are time-consuming because an LLM will need to be queried multiple times to get all possible scenarios. \n\nTherefore the authors propose a method that doesn't involve querying an LLM when determining future states but rather leverage the semantic space for more efficient searching. More specifically SCOPE involves 1) training a Transition model that samples a state where a state is a conversation context ending in a Human Turn and 2) training a reward model that predicts the reward at a certain state. The reward is the number of tokens in the user output and harmlessness which is predicted from Llama-Guard 2. To project the conversation and response into a semantic space the authors use the feature layer of Llama Guard 2 as the semantic embedding.\n\nThe authors then compare their SCOPE method against a variety of baselines which include: not doing any conversation planning, doing conversation planning for only one step, vanilla MCTS (which is time-consuming) and selecting a random response. They evaluate by measuring the cumulative reward and find that SCOPE outperforms all these methods and is much more efficient than vanilla MCTS. Both the training and testing were done on the Lmsys-chat-1m and Daily Dialog datasets.\n\nThey ran ablation studies to find what is the best type of model architecture to use for their Transition model and how many turns is good enough to plan ahead." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My biggest concern is around the evaluation of this method along with the reward model. \n\nRegarding the reward model: I think that the harmlessness metric makes sense and the use of Llama-Guard2 is a good decision. However for engagement I don't think just measuring the token length of the user response is enough. Yes that is definitely a fine proxy to have but I don't think it is enough and I don't think \"greater commercial benefits\" is a good enough motivation. For one thing if this method was say used in spoken conversations then token length wouldn't be a good enough metric. One idea is to perhaps measure how often is the user asking questions to show that they are engaged in the conversation. 
\n\nRegarding evaluation: Overall the authors look at maximizing the cumulative reward to determine what is the best method in this case which is a good setup but I would think having some human evaluation could help solidify their arguments unless they disagree in which case I'm happy to hear why." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Questions:\n\n- In experiments you used $\\lambda=0.1$ for UCT, which forces the tree search to focus on exploitation instead of exploration. This is rather an uncommon value. Is there a reason for this?\n- Can you provide more details about the benchmarks you tested? Currently its only mentioned in L363-365 as \"dialogue datasets consisting of open conversations and dialogue between LLM and humans\". Are these generic dialogues from existing chat datasets or are these curated from certain dialogue planning benchmarks?\n\nComments and Typos:\n\n- Planning in semantic/latent space (L108-111) has been explored in some prior work [1-2]. These should be mentioned in this paper as related work.\n- In L259 and L346, it should be \"conversation states s\" instead of \"conversation starter s\"\n- Currently Introduction and Background/Related work takes up more than 4 pages. This is too long, as it leaves little room for methods and experiments. I would suggest the authors to trim Section 1-3 as much as possible (e.g., details about MCTS can be moved to appendix).\n- \"Section 6.5 Conclusion\" should be \"Section 7 Conclusion\".\n- If I understood correctly, \"0-step greedy\" directly chooses the best response according to the reward model? If so, this should be named \"rejection sampling\" instead, which is a common approach used in many RL related work.\n\n\n---\n\nReferences\n\n[1] Lubis, Nurul et al. “LAVA: Latent Action Spaces via Variational Auto-encoding for Dialogue Policy Optimization.” ArXiv abs/2011.09378 (2020): n. pag.\n\n[2] Vlastelica, Marin et al. “Taming Continuous Posteriors for Latent Variational Dialogue Policies.” AAAI Conference on Artificial Intelligence (2022)." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Conducting MCTS in semantic space by modeling the transitions/reward functions in semantic space is novel. As the author mentioned, such an approach significantly reduces the search time from MCTS while retaining a high performance (if the transition/reward can be learned well)\n\n- The authors supported many subtle claims with empirical evidences/theoretical analysis (in Appendix). For example, Appendix A.7 provides additional details to verify the effectively of using probabilistic models for stochastic transitions, and Appendix A.2 presents theoretical justifications for the optimal solution in semantic space, and more. 
This indicates that the proposed method/problem has been well thought and studied.\n\n- The authors evaluated their approach against popular methods such as MCTS, and showed improvement in performance despite using much less test-time compute." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on speeding up MCTS in semantic space to improve dialogue policy planning. The author proposes SCOPE, a method to convert dialogue into transition/reward functions in semantic space (i.e., embedding space), and then conduct MCTS to determine the optimal response. Specifically, SCOPE obtains the transition function by 1) convert dialogues into embeddings using LLaMA-2 Guard, and then 2) train a model to model state transition using an existing conversation dataset. Then, SCOPE obtains the reward function by similarly training a model to predict the reward associated with each state in the semantic space. Finally, the author evaluated SCOPE against methods such as rejection sampling (i.e., 0-step greedy) and MCTS, and show that SCOPE can achieve superior performance with significantly less compute." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While the authors argue that \"our paper focuses on the training-free setting and explores inference time strategies\" (L59), SCOPE is not training free, as it requires training the transition and reward model before test time. This makes direct comparison (e.g., performance v.s. speed) to prompt-based MCTS unfair, as the latter strictly uses no training.\n\n2. This work trains a transition function to predict $T(s) \\to (a',s')$ instead of $T(s, a') \\to s'$, based on description in L287-293. This means that this transition function needs to *predict both the response that will be generated by the LLM and next the corresponding user response*. This seems unrealistic because 1) if it can accurately model $a'$ then it essentially becomes an LLM, and 2) the planning process becomes *policy agnostic* (also see Algorithm 1 line 7) - a sign indicating that SCOPE may not be robust against using different LLMs as policy models (unlike prompt based MCTS).\n\n3. Since SCOPE requires a trained transition and reward function in latent space, it becomes questionable whether SCOPE can generalize when *evaluation dialogues become OOD compared to the ones used to train the transition/reward function*; or when different LLMs is used to propose candidates at test time.\n\n4. Since SCOPE planning is conducted in latent semantic space, there is a lack of transparency/explanability in its decision making process. This is in contrast to approaches that plans in text space (e.g., prompt based MCTS). This could present difficulties to researchers or users to understand how or why certain actions were chosen." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Conversation planning typically uses many LLM queries for look-ahead simulation to select responses that maximize long-term rewards. By learning transition and reward models in text semantic space, we conversation plan without needing LLM queries." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024broaden,\ntitle={Broaden your {SCOPE}! 
Efficient Conversation Planning for {LLM}s with Semantic Space},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3cgMU3TyyE},\nnote={under review}\n}" }, "abstract": { "value": "Large language models (LLMs) are used in chatbots or AI assistants to hold conversations with a human user. In such applications, the quality (e.g., user engagement, safety) of a conversation is important and can only be exactly known at the end of the conversation. To maximize its expected quality, conversation planning reasons about the stochastic transitions within a conversation to select the optimal LLM response at each turn. Existing simulation-based conversation planning algorithms typically select the optimal response by simulating future conversations with a large number of LLM queries at every turn. However, this process is extremely time-consuming and hence impractical for real-time conversations. This paper presents a novel approach called Semantic space COnversation Planning with improved Efficiency (SCOPE) that exploits the dense semantic representation of conversations to perform conversation planning efficiently. In particular, SCOPE models the stochastic transitions in conversation semantics and their associated rewards to plan entirely within the semantic space. This gives the advantage of allowing the optimal LLM response to be selected at every conversation turn without needing additional LLM queries for simulation. As a result, SCOPE can perform conversation planning 70 times faster than conventional simulation-based planning algorithms when applied to a wide variety of conversation starters and two reward functions seen in the real world, yet achieving a higher reward within a practical planning budget." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Conversation Planning", "Tree search for LLM" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/62c1178f74628ea22d8144ccb4c32b62cad48b93.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Broaden your SCOPE! Efficient Conversation Planning for LLMs with Semantic Space" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3cnXu5iIP5
Diss-l-ECT: Dissecting Graph Data with local Euler Characteristic Transforms
main
Active
topology;geometry;topological data analysis;graph learning;node classification;spatial alignment;interpretable graph learning
learning on graphs and other geometries & topologies
3;5;6;8
4;5;4;3
1;3;3;3
2;2;3;3
2;4;3;3
5.5
4
2.5
2.5
3
-0.588348
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Would it be possible to include experimental results for the other datasets offered in “A critical look at the evaluation of GNNs under heterophily: Are we really making progress?” by Platonov et. al. or an argument as to why this is done?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper presents the Local Euler Characteristic Transform (L-ECT) as an extension of the traditional Euler Characteristic Transform, enabling a lossless representation of local graph structures and addressing key limitations of Graph Neural Networks (GNNs) such as oversmoothing and loss of local detail in high heterophily graphs. This novel transformation preserves intricate topological information, allowing for more nuanced node representations by capturing both structural and spatial data and offering an alternative to GNN message-passing frameworks. Additionally, the authors introduce a rotation-invariant metric that enables robust spatial alignment of data in Euclidean space, enhancing the method’s applicability in graph-structured data and increasing resilience to coordinate transformations. Empirical results underscore L-ECT’s effectiveness, showing superior performance over standard GNNs in high-heterophily datasets like WebKB, Roman Empire, and Amazon Ratings. Furthermore, L-ECT’s model-agnostic nature facilitates integration with interpretable machine learning models, such as XGBoost, making it ideal for use in regulated fields like healthcare and finance where transparency is paramount. Beyond graph representation, L-ECT extends to point clouds and other high-dimensional data, proving robust to noise and outliers and enabling efficient spatial alignment without the need for exhaustive pairwise distance computations.\n\nThe methods section is detailed yet readable, presenting L-ECT’s mathematical foundation and integrating a rotation-invariant metric for spatial alignment, which adds to the paper’s originality. While the experiments section is robust and results are well-presented through tables and figures, additional visual aids could further clarify data characteristics and enhance accessibility.\n\nthe discussion on the limitations of the approaches proposed in the paper is appreciated" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces the Local Euler Characteristic Transform (L-ECT), an extension of the Euler Characteristic Transform (ECT) designed for graph representation learning. Unlike traditional Graph Neural Networks (GNNs), which can obscure local details through node aggregation, the L-ECT maintains local structural data, thus enhancing interpretability and performance, especially in heterogeneous (high heterophily) graphs. 
By capturing spatial and structural characteristics of local neighborhoods, the L-ECT provides a rotation-invariant metric for data alignment, showcasing improved performance over GNNs in node classification tasks. The method’s compatibility with machine learning models enables use cases beyond standard GNN architectures, offering more accessible and interpretable models, such as tree-based classifiers. Empirical results demonstrate that L-ECT outperforms GNNs in heterogeneous datasets and facilitates robust spatial alignment in both synthetic and high-dimensional data. This research suggests future exploration into scaling L-ECT and integrating global and local information in complex graph structures." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper would benefit, both in making more persuasive the novelty of the work with respect to contemporary literature as well as clarity of the work itself, with a more robust background and related works section \n\nIncluding a more robust and explicit comparison to related works, which also addresses the novelty of the work being proposed, would be appreciated.\n\nThe L-ECT approach, while innovative, faces several limitations and lacks certain aspects of novelty. Its computational complexity scales with graph size and density, making it less efficient for very large or dense graphs and primarily feasible for medium-sized datasets. Although L-ECT emphasizes local information preservation, similar topology-aware or geometric GNN approaches also capture neighborhood-specific details, reducing the uniqueness of this feature. Additionally, traditional GNNs perform comparably well on low-heterophily datasets, indicating that L-ECT may not consistently outperform them across all types of graph data. The approach’s scalability is further limited by sampling trade-offs, as its accuracy depends on carefully chosen parameters, such as direction and filtration steps, which challenge fidelity and computational efficiency at scale. Moreover, despite its model-agnostic design, L-ECT’s interpretability hinges on pre-defined features, potentially restricting its flexibility for complex, dynamic graphs. Finally, L-ECT does not support end-to-end learning as GNNs do; instead, it relies on external classifiers (e.g., XGBoost), which may limit its integration into more comprehensive, end-to-end pipelines.\n\nThe authors should include comparison other works which construct topological representations of graphs and graphs neighborhoods and include reference to those related methods such as “graph filtration learning” by Hofer et. al. and other approaches as discussed in survey works such as “A Survey of Topological Machine Learning Methods” by Hensel et. al.\n\nThe authors provide comparative experimental analysis to a number of datasets. It may be misleading, however, to not include other models as discussed in “A critical look at the evaluation of GNNs under heterophily: Are we really making progress?” by Platonov et. al." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Innovative Use of Euler Characteristic Transform: Employing the ECT to enhance graph representation learning, especially in settings with heterophily, is a novel and interesting approach.\n\n2. Solid Theoretical Foundation: The work is thorough, with strong theoretical results that effectively support the proposed method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces the Local Euler Characteristic Transform ($l$-ECT), an extension of the Euler Characteristic Transform (ECT) designed to enhance expressivity and interpretability in graph representation learning. It provides a lossless representation of local neighborhoods and addresses key limitations in GNNs by preserving nuanced local structures while maintaining global interpretability. Their method demonstrates superior performance over standard GNNs on node classification tasks, particularly in graphs with heterophily." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Missing Important Related Works & Limited Experimental Comparisons: The quantitative experiments focus mainly on node classification tasks in heterophilic graphs but compare the proposed method only with basic models like GCN and GAT. While the authors acknowledge related works on GNNs designed for heterophily in Section 3, the coverage is still limited. It is suggested that the authors include more related works such as [1-5] and select appropriate GNNs for experimental comparison to strengthen the validation of their method.\n\n[1] Beyond Homophily in Graph Neural Networks: Current Limitations and Effective Designs\n\n[2] Graph Neural Networks with Heterophily\n\n[3] Predicting Global Label Relationship Matrix for Graph Neural Networks under Heterophily\n\n[4] ES-GNN: Generalizing Graph Neural Networks Beyond Homophily With Edge Splitting\n\n[5] GBK-GNN: Gated Bi-Kernel Graph Neural Networks for Modeling Both Homophily and Heterophily" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "See weaknesess." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. 
**Novel l-ECT Framework**: Extending the Euler Characteristic Transform to capture local graph details in embedded simplicial complexes is impactful, with theoretical insights enhancing its expressivity, especially for featured graphs.\n\n2. **Extracting Key Information from Node Neighborhoods from Attribute Space**: The l-ECT enables to obtain node neighborhood information by effectively utilizing the information from attribute space.\n\n3. **Experimental Validation**: The l-ECT consistently outperforms traditional GNNs in node classification tasks, particularly in high-heterophily settings, highlighting its interpretability and effectiveness.\n\n4. **Presentation:** The presentation is very good." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors introduce a new topological feature extraction methods, Local Euler Characteristic Transform (l-ECT), extending the Euler Characteristic Transform (ECT) to provide a lossless, interpretable representation of local graph neighborhoods, addressing limitations in traditional Graph Neural Networks (GNNs). This novel approach improves performance in node classification tasks, especially in heterophilous graphs, by preserving both local and global structural details." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Limited Applicability:** The proposed approach is constrained to graphs with node feature vectors in $\\mathbb{R}^n$, limiting its applicability to datasets that fit this specific structure.\n\n2. **Effectiveness of Approach:** While the concept of embedding the graph into an attribute space using node attribute vectors is promising, the subsequent steps for extracting meaningful information appear less effective. The method could be enhanced by exploring simpler and more efficient ways to utilize the geometry (rather than topology) of ego networks induced within the attribute space.\n\n3. **Feasibility in High Dimensions:** As the dimension $n$ of the feature space increases, the number $m$ of representative vectors on $S^{n-1}$ must grow nearly exponentially. Furthermore, the feature vector range impacts the number of intervals {$t_i$} needed. For high-dimensional and wide-range data, this results in a very high-dimensional $l$-ECT vector, making the approach impractical for real-world applications. Dimension reduction could help by reducing feature dimensionality to three (as two dimensions may be insufficient for graph embedding) and normalizing feature vectors (e.g. total diameter of feature vector space to 2), allowing for \"end-to-end\" a fixed-size feature extraction for nodes. Without this, selecting vectors and thresholds can be challenging, particularly for new users.\n\n4. **Theoretical Contributions vs. Practical Applications:** While the rotation-invariant metric is mathematically appealing, it may lack practical relevance since it relies on the infimum over all rotations. Also, the discussion of graph isomorphism seems tangential, as Definition 2 is highly restrictive, applicable only to isomorphic graphs with identical feature vectors.\n\n5. **Experimental Results:** The presented results are uninformative and potentially misleading. The models used, GCN and GAT, are older and are known to perform poorly in heterophilic settings. 
The authors should consider comparing their approach with newer GNN models that perform well on heterophilic datasets and include more homophilic datasets (other than Computers and Photo) to provide a comprehensive performance assessment. Also, exploring the integration of $l$-ECT vectors with a more recent GNN model may yield interesting insights into performance enhancement." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "Using local Euler characteristic transform for graph representation is novel to me." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a local Euler characteristic transform for enhancing feature representation for graph learning. This approach addresses key limitations in GNNs by preserving nuanced local structures while maintaining global interpretability." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. To compute ECT or l-ECT, one needs to embed a simplicial complex in a Euclidean space. The authors propose to embed using node features. However, I don't think this is a genuine embedding. For example, if the feature space is \mathbb{R}^2, then even if the nodes are embedded into the plane in a 1-1 fashion, the edges may cross each other. Therefore, only talking about vertex embedding is insufficient as a graph or a simplicial complex has additional structures. \n2. Related to 1, the author should be more specific about ``embedding'', whether it is metrical embedding, differential embedding, or topological embedding (or something else?). \n3. The proofs are poorly written. The statements are vague and imprecise. Many details are missing. It is hard to assess the correctness of the results. \n4. It seems to me that the proposed l-ECTs capture local structural information. They are used as node features, but not used to guide feature aggregation. I fail to get the intuition of why they can be useful for the node classification task. However, on the other hand, they might be useful for the graph classification task. \n5. The compared benchmarks are very limited (only GCN and GAT). From my own experience, the results are not very impressive, e.g., for the Actor, Squirrel, and Chameleon datasets, there are more recent benchmarks (e.g., ACM-GCN) whose performance is at least 5%-10% higher than those reported by the authors. \n6. Ablation study is missing. It is hard to assess whether l-ECTs play an important role in the results shown." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We present local Euler Characteristic Transforms and show its expressivity for interpretable node classification."
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024disslect,\ntitle={Diss-l-{ECT}: Dissecting Graph Data with local Euler Characteristic Transforms},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3cnXu5iIP5},\nnote={under review}\n}" }, "abstract": { "value": "The Euler Characteristic Transform (ECT) is an efficiently-computable\n geometrical-topological invariant that characterizes the global shape of data. \n In this paper, we introduce the Local Euler Characteristic Transform (l-ECT), a novel extension of the ECT particularly designed to enhance expressivity and interpretability in graph representation learning.\n Unlike traditional Graph Neural Networks (GNNs), which may lose critical local details through aggregation, the l-ECT provides a lossless representation of local neighborhoods.\n This approach addresses key limitations in GNNs by preserving nuanced local structures while maintaining global interpretability.\n Moreover, we construct a rotation-invariant metric based on l-ECTs for spatial alignment of data spaces.\n Our method exhibits superior performance than standard GNNs on a variety of node classification tasks, particularly in graphs with high heterophily." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "topology", "geometry", "topological data analysis", "graph learning", "node classification", "spatial alignment", "interpretable graph learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/44ac974cda4248925c52bcf84627f0a0e319b876.pdf" }, "presentation": null, "primary_area": { "value": "learning on graphs and other geometries & topologies" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/392755d04e854c289a78625de5a6770e649e5e70.zip" }, "title": { "value": "Diss-l-ECT: Dissecting Graph Data with local Euler Characteristic Transforms" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3cvwO5DBZn
On Speeding Up Language Model Evaluation
main
Active
large language models;evaluation;matrix factorization
foundation or frontier models, including LLMs
5;5;6;10
4;4;4;3
2;3;3;4
2;3;3;3
2;2;2;4
6.5
3.75
3
2.75
2.5
-0.980196
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "-\tLine 186: by adaptive selecting --> by adaptively selecting\n-\tProof reading is needed to fix typos. \n-\tWhat does a stand for in equation in line 190?\n-\tI suggest some table/mapping to keep track of the different notions used and their meaning." }, "rating": { "value": 10 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "-\tThe proposed algorithms can be used for a variety of evaluation use cases, not limited to LLMs.\n-\tThe paper provides enough and clear description of relevant concepts on which the proposed solution is built.\n-\tThe paper is very well-written.\n-\tThe proposed approaches show great money and time reduction on large evaluation datasets. \n-\tThe approach was evaluated on variety of tasks, setups, methods, and was evaluated using a thoughtful evaluation approach." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes two active selection algorithms for evaluation based on the classical approach of estimation of the upper confidence bound. The main aim of the proposed algorithms is to identify the best performing method across a set of validation examples, given a fixed evaluation budget. The budget can be monetary cost or GPU time. The methods to evaluate can be different prompts or hyperparameter settings." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "-\tNothing major to report here" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "In the experimental setup, the authors set the rank r = 1or UCB-E-LRF. Although the ablation study shows that r=1 achieves the best performance, this choice appears overly low compared to related research. The authors should provide more evidence to demonstrate that this is not due to sampling error or an inadequately small dataset." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper introduces UCB-E and its variant UCB-E-LRF. 
\n\nThe authors conducted extensive experiments across multiple datasets and used repeated random seeds, which enhances the stability of the results." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates the evaluation problem of large language models (LLMs) and proposes a UCB-based evaluation method that can identify the optimal LLM strategy for a specific task with a lower budget." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Some descriptions in the paper are unclear. For instance, Figure 3, which presents key experimental results, lacks a legend, making it difficult to interpret. \n\n- Additionally, the paper does not clearly define the baseline methods used in the experiments. \n\n- Some results also lack in-depth discussion. For example, Figure 3 shows that UCB-E and UCB-E-LRF perform inconsistently across different datasets. The authors attribute this to varying dataset difficulty; however, when comparing dataset pairs like (GSM8K Models, PIQA Models) and (GSM8K Prompts, PIQA Prompts), the conclusions are contradictory. More detailed explanation and discussion from the authors are needed here." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "In the whole paper, only one baseline is described: uniformly sample and evaluate T examples, but three baselines are mentioned later on, and shown in the figures. What are the two other baselines? Can they be given some attention in the paper?\n\nI have multiple small suggestions to improve the clarity and readability of the paper:\n- In Table 2, the H1 value for each dataset is stated. But there is no explanation of what a higher or lower value means in the caption, or anywhere near where the table is cited. I had to refer to Corollary 1, where it is mentioned, and re-read it to figure out what a higher or lower value means, to later find an explanation in section 4.4. \n- In Table 2, the columns are ordered: “Dataset Name”, “Size m x n”, “Method Set”. The size is m x n, m stands for methods, n for data samples. I would either swap the “Dataset Name” and “Method Set” columns, or transform the “Size m x n” column to “Size n x m” to have a natural ordering of the columns and the order of the sizes.\n- In Figure 3, there is no legend for the curve colors. In the caption of Figure 4 it is stated that the UCB-E and UCB-E-LRF are blue and red (at least for Figure 4), but there is no mention of the other curves anywhere.\n- Figure 3 has the datasets ordered from highest H to lowest, and it is mentioned in 4.4 (2 pages forward) that they are ordered by hardness. There is no mention that they are ordered from hardest to easiest, and that higher H means harder and lower H means easier. It can be deduced from the whole text, but it is not immediately obvious."
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- To the best of my knowledge, this is the first paper to be using the multi-armed bandit for LLM model/setup evaluation.\n- The idea is solid, and useful. Especially, with the ever growing number of models, size of models and knobs that you can tweak to improve the performance for a specific/custom task. This framework can substantially reduce resources when practitioners have to choose a best model for their use-case.\n- The algorithms are clearly outlined, making the understanding and reproduction easy.\n- A big strength is that the experiments are done on multiple datasets, with varying H1. This paints a clear picture of how this framework works in different setups, and which one (UCB-E or UCB-E-LRF) to choose for which setup.\nThe ablations are extensive: ensemble size, uncertainty scaling factor, warm-up budget, rank, etc." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes and extends the well-known multi-armed bandit (UBC) for selection of a best method/setup given a set of method (for example: an LLM, prompt, decoding parameters), a scoring function (for example: exact string matching, BLEU, ROGUE, LLM based annotator) and dataset of examples to evaluate the methods on. This extended multi-arm bandit is referred to as UBC-E. Furthermore, the paper proposes to incorporate a low-rank factorization of the observed scores to enable it to reliably interpolate missing evaluation scores. The low-rank factorization leverages the fact that method-examples are correlated with each other. The whole UBC-E-LRF conserves resources while still guaranteeing confidence that the best method/setup will be chosen.\n\n\nAll this is supported by theoretical proof and discussions, and furthermore shown by empirical experiments on three datasets, and various methods and setups. The UBC-E and UBC-E-LRF are compared with baselines, the top-1 precision and NDCG@10 are used as metrics." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- In section 3.2, low-rank factorization: “Intuitively, if the method-examples are vert correlated, there should exist….” while I do agree with the intuition, it would be nice to have a citation here. At least the citation from the appendix: “Chen et al., 2020; Cai et al., 2019”.\n- Even though the information is in the paper, it requires going back and forth to find it. For example, the figure captions are lacing information that is present elsewhere in the text, or not present at all. Some redundancy in the text for the sake of clarity is always welcome. I added suggestions to improve this in the Question section below." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See the weakness" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "(1) The studied task is interesting. \n(2) The proposed method seems to be effective in acclerating the evaluation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes an adaptive approach that exploits the fact that few samples can identify superior or inferior settings and many evaluations are correlated. It uses multi-armed bandits to identify the next (method, validation sample)-pair and low-rank matrix factorization to fill in missing evaluations." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "(1) The paper seems to be recycled and revised from a longer sumbisison, and many parts (especially figures or tables) are with tiny fonts, which are difficult to read.\n\n(2) Some experimental results are difficult to understand, e.g., Table 3.\n\n(3) Overall, I believe evaluation is quite important, and it often involves a number of influencing factors. What if there exists biases in the test datasets? How the comparison results are consistent with human evaluation, since automatic evaluation may not reflect the real capacities of LLMs? \n\n(4) The related work part is weak, which needs more discussions on evaluation of LLMs." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We show that the best language model and/or prompt can be identified with just 5%-15% of the usual computation using our adaptive evaluation algorithms." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024on,\ntitle={On Speeding Up Language Model Evaluation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3cvwO5DBZn},\nnote={under review}\n}" }, "abstract": { "value": "Developing prompt-based methods with Large Language Models (LLMs) requires making numerous decisions, which give rise to a combinatorial search problem over hyper-parameters. This exhaustive evaluation can be time-consuming and costly. In this paper, we propose an \\textit{adaptive} approach to explore this space. We are exploiting the fact that often only few samples are needed to identify clearly superior or inferior settings, and that many evaluation tests are highly correlated. We lean on multi-armed bandits to sequentially identify the next (method, validation sample)-pair to evaluate and utilize low-rank matrix factorization to fill in missing evaluations. We carefully assess the efficacy of our approach on several competitive benchmark problems and show that it can identify the top-performing method using only 5-15% of the typical resources---resulting in 85-95% LLM cost savings." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "large language models", "evaluation", "matrix factorization" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/bc4f75d86bb938b741c1b03d4f329080e70ee7d4.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "On Speeding Up Language Model Evaluation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3d6awrrpUq
Compressed-Language Models for Understanding Compressed File Formats: a JPEG Exploration
main
Active
Compressed File Formats;JPEG;Autoregressive Transformers
applications to computer vision, audio, language, and other modalities
3;3;3;5
5;4;3;4
2;1;2;3
1;1;2;2
3;2;3;2
3.5
4
2
1.5
2.5
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": { "value": "Dear authors & reviewers,\n\nThe reviews for the paper should be now visible to both authors and reviewers. The discussion is open until November 26 at 11:59pm AoE.\n\nYour AC" }, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": { "value": "authors - reviewers discussion open until November 26 at 11:59pm AoE" }, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "Not Applicable" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "None" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The authors have investigated the effectiveness of CLMs in handling raw byte streams from compressed files. Specifically, they have used the JPEG format as the compression mechanism. They have employed three datasets: MNIST, CIFAR-10, and TinyImagenet. The models used are standard models available in the literature (e.g., a small LLaMA-like model). Their tokenization is somewhat new. In general, the results indicate that CLMs are good in dealing with compressed data.\n\nFor instance, the accuracy they obtain for file recognition are: 99% on MIST and 74% on CIFAR. The model seems to be very effective in anomalies detection and files generation. For example, in the context of MNIST, 99% of the files generated are valid JPEG files." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The main goal of this project is to study the effectiveness of Compressed-Language Models (CLMs) in understanding raw byte streams from compressed file formats (CFFs). Specifically, they have used JPEG data in this study and evaluated the performance of CLMs on three functions: identifying inherent properties of compressed files, discovery of anomalies in compressed files, and generation of compressed files." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The novelty of the work is modest." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Why there is only single-byte replacement anomalies considered when simulating anomalous files, would other types of anomalies have a more significant impact on model performance?\n\n2. It seems the results of Section 4.2 are not presented in any table or figure in this section. does the term “MNIST’s validation set” refer to the validation set used during training? If so, what are the results on the test set? Could you clarify how the dataset was actually split?\n\n3. What’s the detail of fine-tune the models for recognizing the semantic class.? What was the data used for fine-tuning like?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The research topic is novel as it explores the understanding capabilities of language models on compressed file formats, specifically focusing on JPEG. This area has potential for applications in efficient data storage and retrieval." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates whether Compressed-Language Models (CLMs) can understand files compressed in the JPEG format. The authors test the models' capabilities in three aspects: recognition of file properties, handling of anomalous files, and generation of new files. \nThe study uses simple image datasets (MNIST and CIFAR-10) presented in encoded format as sequence data to train a small LLaMA-like model to conduct the experiment. The results suggest the model can effectively perform these tasks without decompression." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper claim that the focus of the research is on testing the understanding capabilities of compressed language models (CFFs). Besides, they draw the conclusion that “CLMs can understand the semantics of compressed data”. But the test is conduct on only one model trained by the author, instead of any existing language models. Therefore, I think the result is insufficient to support the conclusions drawn in the paper.\n It looks that JPEG-encoded formats exhibit language-like properties like any other sequences and the object is also to optimize for next-token prediction. It is not that surprising to see a model trained on sequence data works well within the same datasets (CIFAR-10 and MINIST, though as encoded data). Therefore, I think the main finding of this paper lacks some novelty.\nThe paper outlines the characteristics of compressed file formats (CFF) and the challenges compressed language models (CLMs) encounter when meeting CFFs but does not clarify enough the need for CLMs to address CFFs or the importance of testing their understanding capabilities." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. File anomaly handling (section 3.2) only considers one-token (one-byte) perturbation. How realistic such anomaly exists in real world applications and results in actual problems? \n\n2. line 361, \"For this procedure, we only consider 10 files (one per class) for each dataset\". How do the 10 files produce a result that \"15% of the anomalous files are broken\"? Did I miss anything?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "Evaluating language model's capability on JPEG byte stream is interesting. It is a bold idea with a potential to show unseen capability or to reveal important limitations of language models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies whether language models trained on compressed file format can be used on three tasks on JPEG files, including recognition of file properties, handling anomalies and generating new files." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Models trained on text and models trained on compressed data have significantly different token space. While the token space of text is demonstrated learnable with various language models, the binary streams produced by compression algorithms may not have generic patterns that are generalisable for a variety of compressed data. It has been argued that the data distribution properties may be the key that drives in-context learning in transformers[1]. I feel that a detailed examination of the token distribution in the compressed data should be provided to justify the approach.\n\n[1] Chan, S., Santoro, A., Lampinen, A., Wang, J., Singh, A., Richemond, P., McClelland, J. and Hill, F., 2022. Data distributional properties drive emergent in-context learning in transformers. Advances in Neural Information Processing Systems, 35, pp.18878-18891. \n\n2. The experiments are done on images with very small dimensions (28x28 for MNIST and 32x32 for CIFAR, additional experiment in appendix with an image dimension of 64x64 ). It is not a surprise that a large model can fit the small search space and provide predicting and generative capability on these datasets. \n\n3. It is likely the language model is tuned to overfit a set of small images. This is little evidence based on the technical presentation of this paper that the model has learned the format of JPEG, therefore the method is unlikely to generalise to data in compressed file formats." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "None" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "S1. The exploration of LLMs to directly handle compressed data is interesting. Even if it may not turn out to be of notable relevance in practice, it may help to shed more insights into the limitations of LLM.\n\nS2. The evaluation tasks are reasonable first steps and shed some light into the compression/decompression capabilities of plain LLMs." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Experimental study that trains and evaluates a decoder-only Transformer model directly on JPEG byte streams (= JPEG-compressed images). Training is done autoregressively as usual. Evaluation tasks are (i) predict JPEG image quality setting and class of example, (ii) detect/locate/fix single-byte errors, (iii) data generation. The key takeaway is that this works reasonably well." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "W1. Paper positioning not convincing. The paper is motivated by a large argument that directly working on compressed format is beneficial. These arguments, however, are inherently flawed. First, arguments about ubiquity, compactness and generality are invalid because if one simply decompresses before training the LLM, these advantages would still hold. Second, while I do see the worth of studying compressed file formats (see S1), I fail to see practical relevance. On the one hand, real-world machine learning pipelines do consist of many domain-specific techniques; e.g., data augmentation (e.g., crop/scale images), helpful tokenization (e.g., SentencePiece), task-specific training objectives (e.g., BERT training) or models (e.g., CNNs). On the other hand, spending resources to \"teach\" a model to decompress/compress when we actually know how to do this more efficiently (JPEG encoder/decoder) is a waste of resources.\n\nW2. Training and evaluation setup not convincing. For task (i), the prediction targets of image quality and class are fed into the training process in a somewhat contrived way to deal with problems of decoder-only models for this task. It's not clear why a decoder-only LLM is the right approach in the first place. For task (ii), the authors make \"erroneous bytes\" are less likely than \"correct bytes\" arguments. But when doing so, they ignore the entire input after the erroneous token (for localization/correction). This, again, a consequence of using decoder-only models. For task (iii), the automatic check is solely on file validity, but ignores the quality of the generated samples (other than the anecdotal examples of Fig. 3).\n\nW3. Limited insight. This is for two main reasons. 
First, the papers make broad claims about compressed file formats, but then only considers JPEG, includes JPEG-specific information into the training pipeline (quality setting), and use only one image size. Second, the paper puts too much focus on whether tasks (i)-(iii) work reasonably well with an out-of-the-box LLM training pipeline. What's much more interesting, however, is exploring where such approaches would fail and why." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "This study shows that Compressed-Language Models (CLMs) can understand and operate on compressed file formats, like JPEG, by recognizing properties, handling anomalies, and generating files directly from byte streams." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024compressedlanguage,\ntitle={Compressed-Language Models for Understanding Compressed File Formats: a {JPEG} Exploration},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3d6awrrpUq},\nnote={under review}\n}" }, "abstract": { "value": "This study investigates whether Compressed-Language Models (CLMs), \\ie language models operating on raw byte streams from Compressed File Formats (CFFs), can understand files compressed by CFFs. We focus on the JPEG format as a representative CFF, given its commonality and its representativeness of key concepts in compression, such as entropy coding and run-length encoding. We test if CLMs understand the JPEG format by probing their capabilities to perform along three axes: recognition of inherent file properties, handling of files with anomalies, and generation of new files. Our findings demonstrate that CLMs can effectively perform these tasks. These results suggest that CLMs can understand the semantics of compressed data when directly operating on the byte streams of files produced by CFFs. The possibility to directly operate on raw compressed files offers the promise to leverage the ubiquitous and multi-modal properties of CFFs." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Compressed File Formats", "JPEG", "Autoregressive Transformers" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/78bccd47dd3dc0ec532fa46b5826aa56a03d1eda.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Compressed-Language Models for Understanding Compressed File Formats: a JPEG Exploration" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3ddi7Uss2A
What Does It Mean to Be a Transformer? Insights from a Theoretical Hessian Analysis
main
Active
hessian;Transformers
unsupervised, self-supervised, semi-supervised, and supervised representation learning
5;5;5;6;8
3;4;3;3;3
3;3;4;3;3
2;4;2;3;3
3;2;4;3;4
5.8
3.2
3.2
2.8
3.2
-0.342997
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- In Figure 3, several plots show a mismatch between the predicted and observed Hessian scaling. The top right plot in Figure 3a doesn't display a prediction at all. Could the authors elaborate on these discrepancies?\n- Some analysis is presented more like a log book without explaining why is it important. For example, what are the key takeaways from Figure 4? More broadly, could the authors clarify the overarching message and how the different analyses contribute to it?\n- The paper claims to provide a Hessian-based perspective on the performance gap between Adam and SGD, referencing Ahn et al. (2023). However, this explanation isn't explicitly provided in the paper. Could the authors elaborate on this point and clarify how their analysis explains this performance gap?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- This paper tackles an important theoretical question regarding the dynamics of Transformers by directly analyzing the Hessian. \n- A thorough theoretical derivation and analysis like this is novel and provides a valuable new perspective. \n- The categorization of Hessian dependencies offers a structured framework for understanding the complex interactions within the architecture.\n- The derivations appear sound and are presented with sufficient detail. \n- The exploration of how different Transformer components impact the Hessian adds depth and rigor to the study.\n- The paper is well written and is generally a pleasure to read. The authors incorporate the existing literature nicely. While the Hessian structure is inherently complex, the authors have made a good effort to explain the key takeaways in an accessible way." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper derives and analyzes the Hessian matrix of a single Transformer self-attention layer. It examines how the Hessian depends on the data, the weights, and the attention mechanism's internal moments. The Hessian is found to be highly non-linear and varies significantly across different parts of the self-attention layer. This variation is caused by the way data enters the attention mechanism as keys, queries, and values. It is also due to the softmax function and how the attention mechanism's query and key components are parameterized. These factors create complex relationships between the data, weights, and the Hessian. The authors believe this analysis helps explain why Transformers have a unique optimization landscape. They also suggest it explains why certain architectural choices, such as using adaptive optimizers and layer normalization, are beneficial for training Transformers." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper analyses only single layer, without saying much about multi-layer.\n- A lot of important aspects are not addressed, e.g. multi-layer, role of residual connection in the Hessian, multi-head attention. Additionally, can you comment on the implications of (W_V) often being a low-rank matrix with rank (d_k)?\n- The paper doesn't have a solid narrative and rather presents a reader with a bag of tricks. See some of the examples in the Question section below. It also makes claims that are not justified, e.g. that it can help explaining the performance gap between Adam and SGD in lines 516-519.\n- To strengthen the paper's narrative, the author should have started with the analysis of the gradient before delving into the Hessian, since it is much simpler. Comparing and contrasting the properties of the gradient and Hessian could provide a more comprehensive understanding." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please answer the questions mentioned in previous section." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper makes an attempt to understand self-attention based models using hessian analysis. This allows authors to compare transformers with architectures such as CNN.\n\n2. The empirical evidence on digit addition task framed as next token prediction task validates the theoretical observations." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper compares the self-attention Hessian to classical networks such as CNN to better understand the unique optimization landscape of self-attention based transformer architectures. The paper provides a understanding self-attention from hessian perspective, which is an interesting line to understand the inner workings of transformers. The empirical experiments on digit addition task validates the theoretical observations by considering CE loss." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper is not well written and is difficult to follow.\n\n2. Authors should clearly state how their observations leads to better understanding of self-attention. It will also be beneficial for the readers if author mentions the consequences of their observations, such does it lead to better interpretability, or sparse attention or stable training.\n\n3. In section 4.2 author discuss alternative to standard query-key parameterization and discusses change in loss landscape when single matrix W_{QK} is used instead of W_{Q}W_{K}^{\\top}. 
The authors should briefly discuss how this change affects the overall performance of transformers: does it make any difference in terms of overall performance for a specific task, or does it have any effect on the interpretability of self-attention?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See Weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- This paper provides a detailed expression of the Hessian of self-attention, which might be useful for the community for the theoretical understanding of Transformers.\n- The presentation is good. I especially appreciate that the authors write different symbols in different colors." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper derived the expression of the Hessian of one self-attention layer and discussed how the special structure of the Hessian makes the transformer special." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Apart from the expression of the Hessian, I don't see any deep and detailed analysis in this paper. For example, the authors claim that understanding the structure of the Hessian can help understand the optimization of Transformers, such as why Transformers have to be trained by Adam(W). However, I don't see any detailed discussion on this point in the paper. I would like to see a deeper discussion showing how the structure of the Hessian derived in this paper connects to real Transformer behaviours.\n- This whole analysis is based on a single-layer self-attention. It is unclear how this analysis (or the conclusions drawn from this one-layer model) can possibly extend to deeper models." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. How do you justify the omission of $\delta_{XY}$ in Equation (5)? If the elements of $\delta_{XY}$ are significantly larger than those of $X$, wouldn't the dependency on $X$ in Equation (5) become trivial?\n\n2. Could you clarify the experimental settings used for Figure 4? You mentioned that Softmax significantly reduces the magnitude of the query Hessian block entries, but this effect isn't very apparent in Figure 4."
}, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "$\\bullet$ The derivation of the Hessian of Transformers provides a new framework of analyzing the dynamics of different components of self-attention.\n\n$\\bullet$ The discovery of the data dependencies among key, query, and value matrices is fundamental for future works, both in theoretical understanding and practical applications." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work derives the Hessian of Transformers to analyze the data dependence of different components in attention and to compare Transformers with other classical models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The omission of the $\\textbf{F}$–functional Hessian blocks ($\\delta_{XY}$) weakens the overall results, as the influence of $\\delta_{XY}$ on the Hessian remains unclear, and there is no detailed discussion about its role.\n\n2. The analysis is focused on a single self-attention layer and does not extend directly to more complex and practical Transformer architectures. The results are insightful but could benefit from further extensions to deeper, more realistic Transformer models.\n\n3. There is no empirical comparison between Transformers and MLPs/CNNs. Including such empirical comparisons would make the findings more compelling and straightforward to interpret." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- I would be interested in having more details about the settings of the experiments leading to the figures shown in the paper, more precisely for Figure 1 and Figure 4, and the dashed lines in Figure 3. What is exactly plotted ? What kind of data were used to obtain these ? How does it confirm the insights derived from the theoretical derivations ?\n- In Figure 3b, what does \"the dashed lines correspond to the trend estimated from the data points by the linear regression coefficient\" mean ? Can the authors describe the setting behind this experiment and how the dashed lines are obtained ?\n- In Figure 3, all the trends in dashed lines are linear, even though the order of the dependence is changing. This makes me think that the range of values considered for $\\sigma$ is too small to clearly evaluate whether the empirical dependence are following the theoretical ones. Can the authors discuss that, and if possible, show results with a bigger range of values for $\\sigma$ ?" 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- **Originality**: To my knowledge, this is the first paper deriving the full expression of the hessian for the self-attention operation.\n- **Significance**: As mentioned in the conclusion of the paper, this work can serve as foundation for better understanding the role of the self attention operation in Transformers. As discussed and shown throughout the paper, the self attention layer has a singular behavior compared to better-understood convolutional or feed-forward layers in neural networks. \n- **Quality**: Although I did not check all the proofs in details, a lot of work has been put to derive Theorems 3.1 and 3.2. The experiments presented in Figure 3 also validates to some extent the theoretical results obtained, in terms of dependence to the training data of two of the diagonal terms.\n- **Clarity**: I appreciated the color-coding of the terms within equations throughout the paper. It makes the reading and understanding of the results easier." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper is interested in deriving the full expression of the Hessian for a single self-attention layer, wrt the learned matrix parameters of query, key and values. The hessian is decomposed into two terms, the outer product and functional hessians, and their expressions are respectively given in Theorems 3.1 and 3.2. Then, the paper analyzes the dependence on the data and how different components of the architecture affect the hessian, such as the softmax activation or the position of the layer normalization." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- **Clarity**: The links between the empirical results shown in the various figures and the insights derived from the expressions of the Hessian are not always clear. For instance, the experiments and what is plotted in Figure 1 are never described. \n- **Quality**: It is difficult to evaluate the validity of all theoretical insights derived from the hessian since the settings of the experiments are not always described. More specifically, settings behind experiments to obtain Figure 1, Figure 3 and Figure 4." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024what,\ntitle={What Does It Mean to Be a Transformer? Insights from a Theoretical Hessian Analysis},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3ddi7Uss2A},\nnote={under review}\n}" }, "abstract": { "value": "The Transformer architecture has inarguably revolutionized deep learning, overtaking classical architectures like multi-layer perceptrons (MLPs) and convolutional neural networks (CNNs). At its core, the attention block differs in form and functionality from most other architectural components in deep learning - to the extent that Transformers are often accompanied by adaptive optimizers, layer normalization, learning rate warmup, and more, in comparison to MLPs/CNNs. The root causes behind these outward manifestations, and the precise mechanisms that govern them, remain poorly understood. 
In this work, we bridge this gap by providing a fundamental understanding of what distinguishes the Transformer from the other architectures - grounded in a theoretical comparison of the (loss) Hessian. Concretely, for a single self-attention layer, (a) we first entirely derive the Transformer's Hessian and express it in matrix derivatives; (b) we then characterize it in terms of data, weight, and attention moment dependencies; and (c) while doing so further highlight the important structural differences to the Hessian of classical networks. \nOur results suggest that various common architectural and optimization choices in Transformers can be traced back to their highly non-linear dependencies on the data and weight matrices, which vary heterogeneously across parameters. Ultimately, our findings provide a deeper understanding of the Transformer’s unique optimization landscape and the challenges it poses." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "hessian", "Transformers" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/52035684b78309b6d3868eccc2cf77c73ddf5217.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "What Does It Mean to Be a Transformer? Insights from a Theoretical Hessian Analysis" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3emaMXjdkF
Cohort Squeeze: Beyond a Single Communication Round per Cohort in Cross-Device Federated Learning
main
Active
stochastic proximal point methods;federated learning;cross-device setting;arbitrary sampling
optimization
3;3;3;5
3;2;4;2
2;2;2;2
2;2;2;2
1;2;3;3
3.5
2.75
2
2
2.25
-0.522233
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Do any of the proposed algorithm variations achieve any theoretical speedup compared to Local GD with i.i.d. client sampling, beyond an improvement in constant factors?\n2. Is there any way to execute optimal stratified sampling in practice?\n3. How does your algorithm compare experimentally against baselines that use client selection strategies besides NICE sampling, e.g. Power-of-Choice [6]?\n4. In the neural network experiments (Figure 4), how does SPPM-NICE compare against LocalGD when SPPM-NICE uses GD as a local prox solver instead of Adam?\n\n[6] Jee Cho, Y., Wang, J. &amp; Joshi, G.. (2022). Towards Understanding Biased Client Selection in Federated Learning . Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The sentence-by-sentence writing is clear.\n2. All of the proofs seem to be correct." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents SPPM-AS, a variant of the stochastic proximal point method that supports various protocols for sampling data. For federated learning, this translates to a federated optimization algorithm that supports various protocols for sampling clients. The method is proven to converge to an $\\epsilon$-approximate solution for strongly convex problems, and experiments show improvements compared to classical baselines." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. I don't see any significant theoretical improvement of the proposed algorithms.\n\n 1a. The iteration complexity is $1/\\epsilon$ (Line 212), which does not seem to improve upon FedAvg expect possibly in terms of constant factors. This is not too surprising considering that FedProx does not improve upon FedAvg, but there are already works in FL showing that the order of client sampling can improve convergence rate in terms of the $\\epsilon$ dependence [1]. Therefore the iteration complexity shown in this paper is not a significant improvement.\n\n 1b. The algorithm which enjoys theoretical guarantees cannot actually be implemented, because the choice of hyperparameters requires knowledge of the global minimum. The iteration complexity (Line 212) requires to choose the stepsize based on $\\sigma_{\\*,\\text{AS}}^2$, which can only be computed with knowledge of $x_*$.\n\n 1c. The only possibility for theoretical improvement is an improvement of constant factors from optimal stratified sampling (Lemma 2), but *optimal stratified sampling cannot be computed without knowledge of the global minimum*.\n\n2. 
I don't see any significant experimental improvement due to issues with the experimental methodology.\n\n 2a. The experimental evaluation only compares against naive baselines of Local GD and Minibatch GD. There are a huge number of works in FL that try to improve optimization with different client selection strategies, and these works are essentially ignored by this experimental evaluation (see [2] and its references). The authors' claim of $74$% improvement compares against the naive baseline, not against state-of-the-art (or any of the relevant existing work).\n\n 2b. SPPM-SS cannot be implemented with real data. As I pointed out in Weakness #1c, optimal stratified sampling cannot be computed without knowledge of the global minimum. To run experiments, the authors instead use a clustering heuristic that stratifies clients according to features, clustered using K-means. However, it is unclear whether such a clustering procedure can be executed in a real federated learning scenario when client data must remain on-device. Without this, a significant portion of the experimental results (Figures 1, 2, part of 3) only describe an algorithm which cannot be implemented in practice.\n\n 2c. The neural network experiments (Figure 4) may not be a fair comparison between LocalGD and SPPM-NICE. SPPM-NICE uses Adam as a local prox solver, which may not be a reasonable comparison against LocalGD, since LocalGD does not include any update preconditioning (known to be important for neural network training). It would be more appropriate to compare SPPM-NICE against LocalGD when SPPM-NICE uses GD as a local prox solver. An alternative is to compare SPPM-NICE w/ Adam against a local version of Adam, for example FedAdam. Appendix E.4 contains NN experiments with different local solvers (Figure 16), but I don't see exactly how these results related to those in Figure 4. It looks like the choice of local solver can create a gap of about 6\\% in train accuracy, and this is described as \"all methods perform similarly\" (Line 1526), whereas a similar gap between LocalGD vs. SPPM-NICE in Figure 4 is described as \"enhanced performance\" (Line 513).\n\n3. The paper exaggerates its own contribution and ignores relevant previous work. There are a huge number of works that improve federated optimization with different client selection strategies, which are ignored by this paper in terms of theory, experiments, and general framing (see [2] and its references). Some examples of exaggerated language that I find inappropriate:\n- Abstract: \"Virtually all FL methods operate in the following manner...\" This claim is not accurate; there are many works in FL that use peer-to-peer communication [3], asynchronous communication [4], etc. Further, I fail to see how the proposed algorithms of this paper do not also fall into the category described in the abstract.\n- Line 524: \"This foundational work showcases a pivotal shift in federated learning strategies\". I don't believe that this work departs very far at all from previous work in FL (e.g. [5] and related works). In my opinion, this kind of self-aggrandizing is not appropriate for a scientific publication.\n\n4. The message of the paper is not totally coherent. The abstract talks about \"cohort squeeze\" and novel communication principles, but most of the paper actually deals with client selection strategies within the standard intermittent communication structure. The experiments discuss local vs. 
global communication (Section 3.6), which seems to be the connection to the \"cohort squeeze\" of the title and abstract, but this section makes up a very small part of the paper's technical content. Perhaps I have missed a connection between the content of the abstract and the content of the main text.\n\n[1] Cho, Yae Jee, et al. \"On the convergence of federated averaging with cyclic client participation.\" International Conference on Machine Learning. PMLR, 2023.\n\n[2] Fu, Lei, et al. \"Client selection in federated learning: Principles, challenges, and opportunities.\" IEEE Internet of Things Journal (2023).\n\n[3] Beltrán, Enrique Tomás Martínez, et al. \"Decentralized federated learning: Fundamentals, state of the art, frameworks, trends, and challenges.\" IEEE Communications Surveys & Tutorials (2023).\n\n[4] Xu, Chenhao, et al. \"Asynchronous federated learning on heterogeneous devices: A survey.\" Computer Science Review 50 (2023): 100595.\n\n[5] Grudzień, Michał, Grigory Malinovsky, and Peter Richtárik. \"Improving accelerated federated learning with compression and importance sampling.\" arXiv preprint arXiv:2306.03240 (2023)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "NA" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please see my comments in the weakness part." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper is easy to read in general." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper applies the stochastic proximal point method (SPPM) to federated learning. A convergence analysis of SPPM with strongly convex objectives is given, and experiments show that SPPM can reduce the total communication cost compared with FedAvg." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- In section 2.2, the author(s) discussed some properties of SPPM-AS, but I cannot find the communication cost analysis of SPPM-AS, which is the most important factor of FL algorithms. Theoretically, how does the total communication cost of SPPM-AS compare with existing FL algorithms such as FedAvg, FedProx, SCAFFOLD, etc.?\n- Similar to the question above, how is $prox_{\gamma f_{S_t}} ( x_t )$ being solved? There must be some communication among the clients in $S_t$ during the optimization; how expensive is this communication?\n- Table 1 is not very easy to read. I did not fully get the meaning of lines 313-323 when I read it for the first time.\n- In line 340, how is $\tilde{f}_i$ defined?\n- In experiments, how the proximal point problem is solved is kind of vague: what is the local communication cost, and how is the local communication cost being controlled in each experiment?\n- Federated learning has been studied for many years.
The baseline methods in the experiments is limited (FedAvg), the author(s) should include some more recent FL algorithms." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. The authors claim that increasing the number of local communication rounds can reduce the total cost. Does this claim hold for all numbers of local communication rounds, or is there a tradeoff between local communication rounds and total cost?\n2. The authors state that the stratified sampling optimal clustering is impractical, so they employ a clustering heuristic which is K-means. What are the differences between these two methods? \n3. The authors indicate that stratified sampling outperforms nice sampling. Why do they provide the experiment results of CNN under nice sampling rather than stratified sampling?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This paper introduces a new cross-device federated learning framework called SPPM-AS that supports arbitrary sampling strategies. The effectiveness of SPPM-AS is validated through both theoretical analysis and numerical experiments." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Based on SPPM, this paper proposes SPPM-AS, a cross-device federated learning framework that supports arbitrary sampling strategies. The performance of SPPM-AS is evaluated both theoretically and numerically." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The presentation of the paper needs to be improved. For example, it is not easy to follow the paper since a lot of important discussions and results are presented in the appendix. \n2. A detailed explanation of Algorithm 1 or SPPM should be provided to improve better reader understanding. \n3. The theoretical analysis is based on the strongly convex assumption. Extending the analysis to a more general non-convex setting would strengthen the paper.\n4. The comparisons between different sampling methods are based on simplified settings, e.g., $b$ clusters of uniform size $b$, with blocking size and the number of blocks set as 2.\n5. The authors only provide experiments on logistic regression using datasets from the LibSVM repository and on CNN with FEMNIST dataset, which are relatively simple. To better demonstrate the performance of SPPM-AS, experiments on more complex datasets (e.g., CIFAR-100, Shakespeare) and tasks (e.g., NLP) are recommended.\n\n\nMinor:\n1. Notations should be explained when they first appear in the paper, e.g., $n$.\n2. In line 93, \"dashed line\" should be corrected to \"dashed red line\".\n3. Abbreviations should be defined upon their first appearance in the paper, e.g., $HP$." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The research provides a thorough theoretical underpinning to the SPPM-AS method, complete with convergence proofs. This not only bolsters the credibility of the approach but also offers a deeper understanding of its operational dynamics. The paper goes beyond mere theoretical exposition by delivering comprehensive interpretations of the theoretical outcomes, making the material more accessible and applicable for readers.\n\n2. A significant aspect of the paper is its in-depth exploration of diverse sampling strategies, each accompanied by a detailed explanation and analysis. The authors present the sampling variance for each strategy and offer a comparative analysis, highlighting the nuances and implications of choosing one strategy over another. This meticulous examination of sampling strategies enriches the paper's contribution to the field of federated learning.\n\n3. The empirical validation of the theoretical findings is a testament to the practical viability of the SPPM-AS method. Through a series of extensive experiments on both convex and non-convex models, the paper demonstrates the method's robustness and effectiveness in real-world scenarios. These experiments solidify the theoretical claims and showcase the method's potential to be integrated into existing federated learning frameworks, thereby bridging the gap between theory and practice." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents an innovative method in the domain of federated learning, breaking away from the conventional approach where client cohorts interact with a central server solely once per training cycle. The authors have developed SPPM-AS (Stochastic Proximal Point Method with Arbitrary Sampling), a technique that facilitates additional communication rounds within each cohort, potentially slashing the overall communication expenditure needed for cross-device model training.\n\nTheoretical underpinnings of SPPM-AS are thoroughly examined, with a focus on its convergence characteristics, and are juxtaposed with those of traditional methods. The study delves into the effects of various hyperparameters—including the learning rate and frequency of local communications—on algorithmic performance. Empirical evaluations conducted across both convex (logistic regression) and non-convex (neural network) models substantiate the method's proficiency in lowering communication expenses without compromising accuracy, and in some cases, even enhancing it over current methodologies." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The core novel contributions of this paper are still unclear to me. I appreciate the detailed explanations of the theoretical results and the examples with various concrete sampling strategies. However, the novel algorithm SPPM-AS seems to heavily rely on SPPM, which can already be applied directly to the federated learning setting (Equation 1). I want to understand the technical differences and contributions of SPPM-AS compared to the SPPM algorithm. Please provide a more explicit comparison between SPPM-AS and SPPM, highlighting the key technical differences and innovations. \n\nMoreover, It appears that this paper eliminates the need for the second-order similarity condition in SPPM. How eliminating the second-order similarity condition can be achieved in your proof is of great interest to me.\n\nLast but not least, explain in more detail how the multiple communication rounds within cohorts contribute to the novelty of the approach." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Contrary to current practice of federated learning, we show that it's better for a cohort to be involved in more than a single communication round." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024cohort,\ntitle={Cohort Squeeze: Beyond a Single Communication Round per Cohort in Cross-Device Federated Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3emaMXjdkF},\nnote={under review}\n}" }, "abstract": { "value": "Virtually all federated learning (FL) methods, including FedAvg, operate in the following manner: i) an orchestrating server sends the current model parameters to a cohort of clients selected via certain rule, ii) these clients then independently perform a local training procedure (e.g., via SGD or Adam) using their own training data, and iii) the resulting models are shipped to the server for aggregation. This process is repeated until a model of suitable quality is found. A notable feature of these methods is that each cohort is involved in a single communication round with the server only. In this work we challenge this algorithmic design primitive and investigate whether it is possible to “squeeze more juice” out of each cohort than what is possible in a single communication round. Surprisingly, we find that this is indeed the case, and our approach leads to up to 74% reduction in the total communication cost needed to train a FL model in the cross-device setting. Our method is based on a novel variant of the stochastic proximal point method (SPPM-AS) which supports a large collection of client sampling procedures some of which lead to further gains when compared to classical client selection approaches." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "stochastic proximal point methods", "federated learning", "cross-device setting", "arbitrary sampling" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/019cece041f30b7e430f8a8c4f4c153f1f529bc4.pdf" }, "presentation": null, "primary_area": { "value": "optimization" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/e756187afc4e1d043496695e0b625f19d884dc8c.zip" }, "title": { "value": "Cohort Squeeze: Beyond a Single Communication Round per Cohort in Cross-Device Federated Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3ep9ZYMZS3
Model-Agnostic Knowledge Guided Correction for Improved Neural Surrogate Rollout
main
Active
deep learning;knowledge guided machine learning;scientific machine learning;computational fluid dynamics;reinforcement learning
applications to physical sciences (physics, chemistry, biology, etc.)
1;3;3;6
4;4;3;3
1;2;2;3
1;2;2;3
3;2;2;3
3.25
3.5
2
2
2.5
-0.70014
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- What are some real science or engineering-based problems/case studies that suffer from rollout errors? What is their dimensionality and what time-scales are relevant for these problems with respect to decision-making (e.g., control problems where one needs to perform an action within fractions of second or other?). \n- Based on above answer, how would this method scale to real systems (if they are different to the presented 2D N-S problem)?\n- How is this work relevant or different to earlier hybrid modeling work developed in process systems engineering or for control starting in the 1990's?\n- How would only surrogate techniques predictive errors change if they were re-trained with new data from simulator (if this was not done already)? In other words, is it the hybrid modeling structure or the adaptability or both that are novel and effective in this work?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is well-written and organized. \n- The paper merges concepts from different fields (hybrid modeling + neural surrogates) in interesting ways. \n- The results that are presented are favorable for the proposed method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper the authors tackle model prediction mismatch due to rollout, by proposing a technique that merges hybrid modeling with neural surrogates. The framework differs from previous literature given that it does not use an \"only-surrogate\" approach, but it also uses on-demand data from a rigorous simulator, when it identifies that this is needed. Results were presented using a 2D Navier-Stokes problem, showing that the proposed method provides improved predictions when compared to a random approach, a purely-surrogate approach, and under challenging scenarios of noise and varying physical conditions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- In the introduction, and in the results presented, motivation describing real scenarios of rollout is missing. The authors mention the fact that rollout is an issue for simulations, but do not cite or list any real examples. Providing such examples and refs even in the introduction would strengthen the paper. \n- The authors present results only using a 2D Navier-Stokes benchmark problem. This further points to the previous comment on motivation. Moreover, using only this problem does not address the scalability of the proposed approach. How would this approach perform in terms of accuracy and computational cost for larger simulations with many dimensions, parameters, variables affecting predictions? 
If the authors cannot provide a larger example in the supplementary material, they could at least add a discussion on this.\n- There is a large body of literature in hybrid modeling (starting from the 1990's, where different structures of model correction or different fidelity of models are embedded) that is relevant to this work but is not mentioned at all in this paper. The authors should include an earlier reference and clearly describe the novelty of the proposed work compared to earlier work as well.\n- The comparisons presented with only pre-trained surrogates do not seem fair, unless I have misunderstood the approach. The HyPER approach continuously updates the models by getting new data from high-fidelity simulation. The only-surrogate approaches do not (again, unless I misunderstood). It is thus expected that the HyPER approach would outperform all else. This can be ok if one considers that the novelty of the HyPER framework is its adaptive nature. However, given that when new data comes, some re-training happens, would it not be fair to allow for the surrogate-only approaches to also be re-trained with new data? It is likely that they would still perform worse, or require more training time, but such a comparison would help better explain the true novelty of the framework." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "NA" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Could you please share the mathematical equations for your model (see corresponding weakness above)?\nCould you please share the training details of HyPER (see corresponding weakness above)?\nCould you explain how HyPER can theoretically reduce the already accumulated error by using a simulation?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This article focuses on the hard goal of reducing rollout error.\nThe article contains a pragmatic approach to cases where the simulation capabilities may not be differentiable (due, for example, to legacy code).\nThe article lays out clearly the hypothesis that the authors want to test about HyPER.\nThe article has an interesting way to incorporate computational cost into the training objective function." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This article represents an effort towards alleviating the rollout errors in neural surrogates for modeling transient dynamics. The authors assume that end-to-end training is not possible because the simulator is not differentiable. They optimize an RL policy that decides to step forward in time either with an accurate non-differentiable solver or with a neural surrogate; the resulting method is called Hybrid PDE Predictor with RL (HyPER). The reward function is a combination of an error term and a computational cost term, which limits the number of calls to the solver. 
Most of the article contains numerical experiments on 2D Navier Stokes and Subsurface flow applications. The experiments try to assess the benefits of the method 1) against surrogate-model-only approaches (UNet and FNO), 2) against changes of physical conditions, 3) against noisy data, 4) against a random policy, 5) against cost/accuracy trade-offs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The article lacks details on the model. The fact that the approach is model-agnostic does not mean that the details to make the methodology reproducible should be omitted. \n- The model equations that would predict one rollout as described in Figure 2 are missing. \n- It is not clear why the authors are limiting their approach to one-step auto-regressive models instead of unrolled networks for the model or the baseline. \n- Details on the computational cost of training the policy as well as its implementation are missing, which makes the results hard to reproduce. RL is known to be unstable during training; the authors should communicate the overall computational cost of training, what hyperparameters they needed to choose, and how they chose them.\n- The reported results don't have any error bars.\n- Some terms are not explained, such as y in equation 6.\n- The notations are not consistent across equations (between Eq. 4 and Eq. 6 for example). \n- The illustrative 2D examples are weak baselines (see https://arxiv.org/abs/2407.07218 for more context) that are not representative of the scale of computation where such a method would be useful. \n- There is no discussion on the convergence with respect to the number of training points, and how this would scale with more challenging 3D problems. \n- To finish, the authors' interpretation that the simulation step would correct the accumulated rollout error from the surrogate model is not substantiated. Such a statement is not supported by the equations because u(x,t) is unaltered to compute u(x, t+delta_t), so the error that is already accumulated in u(x,t) can't be reduced.\n\nNote that the number of pages of the manuscript is one page over the strict maximum of ICLR submissions." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- What is the reason for comparing a knowledge-guided method (with physics or changing BC known) to two data-driven approaches?\n\n- How is HyPER compared to Hybrid approaches and Sim-only approaches?" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "The paper is well-written and easy to follow. The aspect of using RL in rollout error reduction is new." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose the Hybrid PDE Predictor with RL (HyPER) model, which utilizes the reinforcement learning that combines a neural surrogate and a physics simulator to reduce surrogate rollout error significantly. This method is knowledge guided and model-agnostic. Here RL is used to decide incorporation of simulators in the loop. HyPER is compared to FNO and U-Net approaches in both accuracy and efficiency." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The motivation and necessity of using RL with action space {0 = call surrogate, 1 = call simulator} is questionable. If physics knowledge is known, why not directly use simulators for all steps? According to Fig. 6, the computational cost reduction is not that significant. There is no accuracy comparison between HyPER and Sim-Only, but it can be predicted that Sim-Only can be more accurate. I would suggest the authors include a Sim-Only baseline in their comparisons. In Table 4, when there is no noise, the Random Policy and HyPER have almost the same accuracy, which suggests the RL here is not meaningful. \n\n- HyPER is compared to two surrogate baselines: UNet-Only and FNO-Only, and improves the performance significantly. However, HyPER is knowledge-guided with PDE form known and invoked simulator, but the baselines do not require PDE knowledge. HyPER can perform better because of the knowledge imposed. The comparison is not fair. \n\n- When changing physics conditions, the HyPER is trained with a simulator that is “fully aware of the changed boundary condition”, but “both surrogate models (UNet and FNO) are not re-trained with the changed boundary condition”. Again, the comparison is not fair. The improvement is from the knowledge of PDE conditions.\n\n\n- Could the authors clarify what “Error” and “Cost” specifically represent in formula (3)?\n\n\n- The paper does not sufficiently detail the parameters and training strategies employed. The SUG and S are not specified." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "NA" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. l25. Define RL at its first appearance. \n2. Eq. (3). How are 'Error' and 'Cost' defined? The error is estimated with respect to which quantity?\n3. Eq. (4). The total reward R seems to be a function of the true solution field u(x,t). Since the ground truth is not available in the inference period, how the reward will be calculated is unclear to me.\n4. l199. The diffusion coefficient is taken as 0.01. Is it giving rise to laminar flow? What is the Reynolds number? It will be interesting to see the performance in high Reynolds numbers or in small diffusion coefficients. Since high rollout error generally occurs at long prediction horizons for turbulent flows more than laminar flows.\n5. 
In the 2D Navier Stokes example, the authors consider only 20 timesteps, which is very small when considering long-term predictions. However, in the Subsurface Flow example, the authors seem to consider 100 timesteps, which is considerable.\n6. How many time steps are used for training and how many for testing is not mentioned. If all the time steps are used during training, it defeats the purpose since, in practice, the neural surrogates cannot be trained for arbitrarily long prediction horizons.\n7. Table 1. Since the HyPER is pre-trained on 400 samples and the intelligent RL policy is fine-tuned on another 400 samples, the compared methods should also be trained on 800 samples since 800 samples are already available. This seems to be acknowledged by the authors in l340.\n8. Why do the fine-tuned models in Fig. 4(b) provide a higher error? Should the fine-tuned models not provide better accuracy than the pre-trained models?\n9. l315. To keep a fair comparison, just as the UNet and FNO are not re-trained with the changed boundary condition, the performance of HyPER should also be tested without fine-tuning the intelligent RL policy.\n10. The authors should also mention the number of parameters of the models.\n11. Fig. 5(a). Are the times for all the methods computed on the same type of device, i.e., CPU or GPU? \n12. Alongside the time in Fig. 5(a), it will be interesting to see when the costly computational simulator is activated during inference." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper is clearly written and well-presented, and sufficient illustrations in terms of figures and tables are provided to support the claims of the authors. The hybrid concept of the rollout error correction using a simulator without needing the simulator to be differentiable is original." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose the Hybrid PDE Predictor (HyPER), which invokes costly computational simulators as knowledge-guided correction to reduce the rollout prediction errors of neural network surrogates whenever required. The proposed model relies on a reinforcement learning policy to invoke the simulator in a cost-aware manner. The resulting framework reduces the rollout error on in-distribution, out-of-distribution, and noisy data and outperforms the compared neural surrogates, at least in the studies presented by the authors." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Except for the switching mechanism, the proposed HyPER provides no significant contribution to the existing literature. \n2. While it is true that neural surrogates produce high rollout errors at long prediction horizons, many recent works [1-6] have been carried out to address this issue. However, these are not mentioned by the authors. In order to correctly acknowledge the effectiveness of the proposed framework, a comparison against some of these frameworks is necessary.\n3. The proposed framework uses a hybrid mixture of neural surrogates and costly computational simulators. However, the comparisons are performed against data-driven surrogates. 
Given the literature on differential physics and other hybrid methods (mentioned by the authors in the paper), it is necessary to compare the HyPER against some of the robust hybrid simulators. \n4. In addition, when the neural surrogates lose temporal correlations with the initial time steps in long prediction horizons, it may be required to perform repeated predictions using the costly computational solvers. In such cases, the cost of simulation is equivalent to directly solving computational solvers like FEM and FDM. \n\n\n[1] Fatone, Federico, Stefania Fresca, and Andrea Manzoni. \"Long-time prediction of nonlinear parametrized dynamical systems by deep learning-based reduced order models.\" arXiv preprint arXiv:2201.10215 (2022).\n[2] Wang, Sifan, and Paris Perdikaris. \"Long-time integration of parametric evolution equations with physics-informed deeponets.\" Journal of Computational Physics 475 (2023): 111855.\n[3] Zeng, Ailing, et al. \"Are transformers effective for time series forecasting?.\" Proceedings of the AAAI conference on artificial intelligence. Vol. 37. No. 9. 2023.\n[4] Navaneeth, N., and Souvik Chakraborty. \"Waveformer for modeling dynamical systems.\" Mechanical Systems and Signal Processing 211 (2024): 111253.\n[5] Lippe, Phillip, et al. \"Pde-refiner: Achieving accurate long rollouts with neural pde solvers.\" Advances in Neural Information Processing Systems 36 (2024).\n[6] Liu, Xin-Yang, et al. \"Multi-resolution partial differential equations preserved learning framework for spatiotemporal dynamics.\" Communications Physics 7.1 (2024): 31." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a novel model-agnostic, cost-aware method which combines a neural surrogate, decision model, and simulator to significantly reduce rollout error when performing time-series PDE prediction." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024modelagnostic,\ntitle={Model-Agnostic Knowledge Guided Correction for Improved Neural Surrogate Rollout},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3ep9ZYMZS3},\nnote={under review}\n}" }, "abstract": { "value": "Modeling the evolution of physical systems is critical to many applications in science and engineering. As the evolution of these systems is predominantly governed by partial differential equations (PDEs), there are a number of sophisticated computational simulations which resolve these systems with high accuracy. However, as these simulations incur high computational costs, they are infeasible to be employed for large-scale analysis. A popular alternative to simulators are neural network surrogates which are trained in a data-driven manner and are much more computationally efficient. However, these surrogate models suffer from high rollout error when used autoregressively, especially when confronted with training data paucity (i.e., a small number of trajectories to learn from). Existing work proposes to improve surrogate rollout error by either including physical loss terms directly in the optimization of the model or incorporating computational simulators as `differentiable layers' in the neural network. Both of these approaches have their challenges, with physical loss functions suffering from slow convergence for stiff PDEs and simulator layers requiring gradients which are not always available, especially in legacy simulators. 
We propose the Hybrid PDE Predictor with RL (HyPER) model: a model-agnostic, RL based, cost-aware model which combines a neural surrogate, RL decision model, and a physics simulator (with or without gradients) to reduce surrogate rollout error significantly. In addition to reducing rollout error by 60%-90% we show that HyPER learns an intelligent policy that is adaptable to changing physical conditions and resistant to noise corruption." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "deep learning", "knowledge guided machine learning", "scientific machine learning", "computational fluid dynamics", "reinforcement learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/7f57435addc3657c3e87d06ec2ce12c91b5d7026.pdf" }, "presentation": null, "primary_area": { "value": "applications to physical sciences (physics, chemistry, biology, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/9699e2ccbdd78909e6eb90ae88e2b34bd90cb2b2.zip" }, "title": { "value": "Model-Agnostic Knowledge Guided Correction for Improved Neural Surrogate Rollout" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
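Several of the reviews of the submission above turn on what an occasional call to the exact simulator can and cannot do about a surrogate's accumulated rollout error. The sketch below is not the authors' HyPER code; the stable linear dynamics, the roughly 1% per-step surrogate error, and the fixed call-the-solver-every-k-steps schedule (in place of a learned RL policy) are all invented for illustration, and it only shows the qualitative trade-off under those assumptions.

```python
# Toy illustration only -- NOT the HyPER model from the submission above.
# Invented assumptions: a stable linear "true" one-step map, a surrogate that
# adds small Gaussian error at every step, and a fixed schedule that replaces
# every k-th surrogate step with an exact solver step.
import numpy as np

rng = np.random.default_rng(0)
dim, T = 8, 100
A = np.diag(rng.uniform(0.90, 0.99, size=dim))  # ground-truth one-step map

def solver_step(u):
    # Exact solver: advances the current state without adding new error.
    return A @ u

def surrogate_step(u):
    # Surrogate: same map plus a small per-step prediction error.
    return A @ u + 0.01 * rng.normal(size=dim)

def rollout_error(u0, call_solver_every=None):
    # Autoregressive rollout; optionally call the solver every k-th step and
    # measure the final drift from the reference trajectory.
    u_pred, u_true = u0.copy(), u0.copy()
    for t in range(1, T + 1):
        use_solver = call_solver_every is not None and t % call_solver_every == 0
        u_pred = solver_step(u_pred) if use_solver else surrogate_step(u_pred)
        u_true = solver_step(u_true)
    return float(np.linalg.norm(u_pred - u_true))

u0 = rng.normal(size=dim)
for k in (None, 5, 2):
    label = "surrogate only" if k is None else f"solver every {k} steps"
    print(f"{label:>22}: final rollout error = {rollout_error(u0, k):.4f}")
```

Note that in this toy the solver step advances whatever state it is given exactly; it avoids adding new error but does not retroactively remove drift already present in the state, which is one of the points the reviewers debate, and whether a learned, cost-aware policy should beat such a fixed schedule or a simulator-only baseline is precisely what they ask the experiments to demonstrate.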
3f8556SIEn
MEDIC: Zero-shot Music Editing with Disentangled Inversion Control
main
Active
Zero-shot Music Editing;Inversion Techniques;Attention Control
generative models
3;3;5;5
4;5;5;3
2;2;3;3
2;2;2;3
2;1;2;1
4
4.25
2.5
2.25
1.5
-0.301511
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "The introduction and methodology has some places that are unclear to me.\n\n1. What is rigid and non-rigid editing in the context of music editing?\n2. In 3.3.1 Global Attention Refinement, equation 2, should the second case be $(M_t)\\_{i,A(j)}$ instead of $(M_t)\\_{i,j}$?\n3. In 3.3.1 Local Attention Blends, the definition of $\\textrm{Threshold}(\\cdot,k)$ does not match the format in equation 3. Also, what are the choices of $k$ and how will different $k$ affect the results?\n4. In 3.3.1 Scheduling Cross-Attention Control, the usage of $\\textrm{Refine}(\\cdot,\\cdot)$ does not match the definition in equation 2.\n5. In 3.3.1 Scheduling Cross-Attention Control, what is the choice of $\\tau_c$ and how will it affect the results?\n6. In figure 2, the harmonic branch outputs something that looks like a \"rapid guitar music.\" Is it an observable phenomenon in experiments, or is it just an assumption? Does the upper part handle non-rigid editing and the lower part handle rigid editing only?\n7. In table 4, why are there no results for [0, 0, 0]?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Good zero-shot editing performance compared to previous STOA. The demo page shows the effective controllability of some music concepts that previous models failed to control.\n2. The benchmark is very useful for future researchers on music editing.\n3. The methodology of Harmonized Attention Control and Disentangled Inversion Technique is novel, which could help zero-shot editing of other domains." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper propose a new approach to do zero-shot music editing by Disentangled Inversion Control, which integrates multiple methods to inject the diffusion process. A novel benchmark is also proposed." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While the experiments are about music editing, the evaluation only uses metrics for general audio editing. Music content-related metrics like chroma distance [1] are missing.\n2. The paper does not seem to be clear enough. See questions.\n3. The values and effects of the hyperparameters in the paper are unclear, like $k, \\tau_c, L$ and $S$. Ablation study or case study by changing these hyperparameters would be helpful to understand the model.\n4. While the methodology seems to be general-purposed, the experiments only focus on music editing. This is okay but limits the impact of the paper a bit.\n5. Typos and formatting errors. Algorithm 1: Inconsistency use of fonts; formatting error of \\hat in $\\hat{M}^{\\textrm{tgt}}$; $\\epsilon_{c_{\\textrm{tgt}}}$ should be $\\epsilon_{\\textrm{tgt}}$. 
Figure 2: inconsistenct notation $M_{\\textrm{tgt}}$ vs. notation in text $M^{\\textrm{tgt}}$. Equation 9: $l$ is not defined. Table 6: not referenced in the appendix text.\n\n[1] Zhang, Y., Ikemiya, Y., Xia, G., Murata, N., Martínez-Ramírez, M. A., Liao, W. H., ... & Dixon, S. (2024). Musicmagus: Zero-shot text-to-music editing via diffusion models. arXiv preprint arXiv:2402.06178." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- The benchmark dataset proposed in the paper is a good idea, but upon reviewing it, I found that it only includes a single audio file. Could the authors further clarify what constitutes the ground truth in this context?\n\n- Finally, I am very curious about the computational efficiency of this method. Does it require more time and resources compared to baseline methods?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The main idea of this paper—incorporating mutual self-attention, cross-attention control, and harmonic control—is sensible, even though each module is not entirely novel. The combination of these mechanisms appears effective, as results indicate that combining them enhances model performance in music editing tasks, providing useful insights.\n\n- The paper is thorough in its experimental design, including both subjective evaluations and a variety of objective experiments. The results effectively demonstrate the validity of the chosen methods for the model.\n\n- The discussion of related work is comprehensive." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper primarily discusses a method for enhancing the performance of zero-shot music audio editing tasks through multiple control mechanisms, referred to by the authors as Disentangled Inversion Control. Additionally, the paper contributes a benchmarking dataset based on MusicCaps, aimed at evaluating the performance of zero-shot music editing models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Although this paper is a strong empirically-driven study, there are certain hypothesis-related issues that could be improved.\n\n- First, the paper needs to clarify what is meant by “rigid” and “non-rigid” tasks. These terms appear throughout the paper, but after re-reading the entire text, I still found no clear explanation of what these tasks entail, which left me quite confused.\n\n- The paper actually addresses a text-guided music audio editing task. However, the language and context in the main body do not consistently maintain this focus. 
Given the current context, I suggest aligning terms in the main text to match the title, shifting from “audio editing” to “music editing.”\n\n- While the proposed multiple control method indeed focuses on different aspects through each control mechanism, whether this approach achieves “disentangled” control is debatable. To demonstrate that the controls are disentangled, the paper should include experiments showing that one control does not interfere with another. While these controls focus on different levels conceptually, they do not intuitively seem orthogonal, making the term “disentangled” potentially misleading. I suggest either adding experiments to confirm this or revising the terminology.\n\n- The paper includes a subjective evaluation, which is commendable. However, the description of this evaluation is incomplete. Typically, subjective evaluations should also describe the gender, age, music background, and musical training distribution of the subjects, which helps with the interpretation of the results. Unlike data annotation, where these factors might be less crucial, they are important here due to potential biases introduced by AMT, and these underlying biases should be considered.\n\n- In addition, hypothesis tests should be conducted for all reported results." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "**Questions**:\n\n1. Objective Metrics\n\nFirst and foremost, since evaluating audio/music editing tasks is not trivial, I think it is crucial to carefully select the evaluation metrics and design the evaluation protocol. On top of this, I have several questions about them.\n\n- Regarding the FAD, LPAPS, and CLAP Score, could you specify which checkpoints were used to calculate each metric?\n- For FAD, if it was calculated using VGG-ish, recent literature (such as in [1]) indicates that this model may not be appropriate for evaluating music samples. To support the effectiveness of the proposed method, I recommend using alternative pretrained models as suggested in [1].\n - For example, based on the correlation between MOS and FAD discussed in [1], it would be more appropriate to use FAD with LAION-CLAP to evaluate musical quality, and to use FAD with DAC/EnCodec embeddings to assess acoustic quality (please see more detail in [1]). \n- For LPAPS, if the authors used the pretrained network from this repository [2], the network was trained on the VGGSound dataset [3] (also see its data filtering process). This raises some concerns regarding its validity for numerical evaluation in music editing. Additionally, the checkpoint provided in that repository [2] has other issues. As noted in this issue [4], the authors of this repository acknowledge problems in the training procedure of the LPAPS itself, and thus, they do not recommend using the LPAPS for at least training purposes. \n - It would be appropriate to calculate the L2 distance using other audio encoders trained properly. 
For instance, as in [1], I recommend calculating the L2 distance based on embeddings from audio encoders like LAION-CLAP, DAC, or EnCodec.\n- Besides, could you provide at least an intuitive explanation, if not a theoretical one, supporting why LPAPS is suitable for evaluating \"consistency\"?\n- CLAP model: Appendix C refers to this repository [5], but I couldn’t find the CLAP model there. Could you clarify this in more detail?\n\n\n2. In cases where a source prompt $\\mathcal{P}$ is provided, is there a benefit to using the inversion process in L3 in Algorithm 2? It seems that just using the proposed attention control technique in Section 3.3 during reverse sampling alone might be sufficient. From an ODE perspective, the score function at a given timestep should be almost the same in both forward and backward directions in terms of conditioning. The difference between them would be the accumulated errors from $z_{0}$ and $z_{T}$. If L3 were removed, it would no longer be 'inversion'. It would be 'text-guided music editing by attention map control' such as in MusicMagus.\n\n3. The definitions of \"rigid\" and \"non-rigid\" tasks mentioned in the Introduction are unclear in the paper, leaving some doubts about the validity of claims regarding the proposed method’s effectiveness. Even the example provided around L321 in Section 3.3.3 does not seem intuitive enough. Could you elaborate more?\n\n4. In Section 2, L143, the authors state, \"Differently, we introduce a plug-in-plus method called Disentangled Inversion Control to separate branches, achieving superior performance with considerably fewer computational resources.\" Was this claim tested thoroughly in the paper? From Table 7, the computational cost of the proposed method appears to be higher than that of the baseline methods (also, it seems that Null-Text Inversion is not included as a baseline).\n\n**Comments**:\n- In diffusion model literature, the terms ‘forward process’ and ‘backward process’ typically refer to the process from $z_{0}$ to $z_{T}$ and $z_{T}$ to $z_{0}$, respectively, even when dealing with inversions [6][7]. To minimize unnecessary confusion for readers, I recommend revising the current manuscript to maintain consistency with prior work in fundamental aspects. (In fact, in Appendix C, the authors use the term “forward guidance” naturally.)\n- In Figure 1, citations to MusicMagus should be included (for a self-contained perspective). There are instances of subjective terms like “tiny distance,” “small distance,” and “large distance” without clarification on what these distances pertain to. While the intent becomes clearer upon multiple readings, I suggest revisions to improve clarity, allowing readers to grasp the meaning on the first read-through. Additionally, the term \"two-branch inversion techniques\" does not appear to be a widely recognized term in inversion, I feel.\n- Missing explanations for indices $i, j, k$ in Section 3.3.1. Also, Eq (6) is not consistent with Eq (3), (4).\n- The values of hyperparameters such as $S, L, \\tau$ are not explained in the experiment section.\n- Section 3.4, L356–L358: Citing only image-editing literature while discussing audio/music editing seems wired.\n- In Appendix C, content from L881 is repeated.\n\n[1] Gui, A., Gamper, H., Braun, S. and Emmanouilidou, D., 2024, April. Adapting frechet audio distance for generative music evaluation. In ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 1331-1335). 
IEEE.\n\n[2] https://github.com/v-iashin/SpecVQGAN\n\n[3] Chen, H., Xie, W., Vedaldi, A. and Zisserman, A., 2020, May. Vggsound: A large-scale audio-visual dataset. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 721-725). IEEE.\n\n[4] https://github.com/v-iashin/SpecVQGAN/issues/13\n\n[5] https://github.com/haoheliu/audioldm_eval\n\n[6] Song, J., Meng, C. and Ermon, S., 2020. Denoising diffusion implicit models. ICLR 2021\n\n[7] Parmar, G., Kumar Singh, K., Zhang, R., Li, Y., Lu, J. and Zhu, J.Y., 2023, July. Zero-shot image-to-image translation. In ACM SIGGRAPH 2023 Conference Proceedings (pp. 1-11)." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "To improve the music editing performance of DDIM inversion, the authors did not simply combine Cross-attention control and Mutual self-attention control; they introduced an additional Harmonic Branch to integrate these techniques. Furthermore, they proposed the Disentangled Inversion Technique. By leveraging these methods, they surpass existing music-editing methods in both objective and subjective metrics.\n\nOriginality/Contribution:\n- Introduction of the Harmonic Branch and Disentangled Inversion Technique for DDIM inversion\n- Proposal of ZoME-Bench" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose MEDIC, a training-free music editing method utilizing pretrained diffusion models. MEDIC extends DDIM inversion to enable better music editing. Specifically, it achieves this by first obtaining the noise latent $z_{T}$ through standard DDIM forward sampling. During reverse sampling, MEDIC incorporates Cross-attention control, as proposed in Prompt-to-prompt, and Mutual Self-attention control, as proposed in MasaCtrl, while introducing \"Harmonic Branch\" for integrating Cross-attention control and Mutual Self-attention control.\n\nAdditionally, authors propose Disentangled Inversion Technique. This approach focuses on the difference between the latent $z^{*}_{t}$ obtained during DDIM forward sampling, and the source latent $z^{src}$, to guide the reverse sampling.\n\nAlongside the MEDI, authors also introduce a new benchmark dataset, ZoME-Bench, designed specifically for music editing evaluation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**Overall**:\n\nThe following points represent the overall weaknesses in the current manuscript. Please refer to the detailed explanations in the latter part of the Weaknesses and Questions sections.\n1. Insufficient or unclear validation of the effectiveness of the proposed method, which is directly related to the originality of this work. (For more details, see A. in Weaknesses and 1. in Questions.)\n2. Unclear motivation for incorporating the inversion process (L3 in Algorithm 2) within the problem setup (where a source prompt $\\mathcal{P}$ is provided). (See more details at 2. in Questions.)\n3. The contribution of ZoME-Bench to the music-editing field seems somewhat limited. (For further details, see B in Weaknesses.)\n\n\n**Details**:\n\nA. The validity of the objective metrics used for evaluation remains unclear. 
For more details, please refer to Question 3.\n- Given the ambiguity of these objective metrics, the experimental justification for the advantages of using the Harmonic Branch and introducing the Disentangled Inversion Technique seems insufficient.\n- On the other hand, the benefits of combining Prompt-to-prompt and MasaCtrl appear to be adequately validated in subjective evaluation. However, this aspect alone may not be sufficient to fully support the originality and strengths of this work.\n\nB. While I agree with the importance of introducing standardized benchmarks in audio/music editing and appreciate the effort to create a dataset, ZoME-Bench still has some limitations. ZoME-Bench includes original audio samples, original prompts, editing prompts/types/instructions, etc., but it lacks edited audio samples that define how the source samples are supposed to be edited in a certain sense. In this respect, although ZoME-Bench contributes to standardizing editing instructions, it leaves unresolved the larger issue of verifying the edited results, which remains a significant challenge in audio/music editing evaluation. Therefore, while ZoME-Bench contributes to the audio/music-editing benchmark to some extent, its impact is limited.\n(I understand how difficult it is to construct such edited audio samples. I mention this point to assess and clarify the degree of the contribution and originality of the ZoME-Bench proposal.)\n\nC. To improve the paper's presentation quality, I recommend revising the current manuscript. (See Comments in Questions.)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Most of my questions are brought up throughout the previous section. In general, there are a number of what I think are typos (but I may be misunderstanding things), as well as my questions regarding the definitions of \"rigid\" and \"non-rigid\", which acronym-like terms refer to what, and questions regarding baseline reproduction and comparison." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The proposed method overall seems reasonably novel and well-motivated. Much space is given to explaining the facets of their method, and graphical comparison to existing methods like MusicMagus is very appreciated.\n- Ablations of proposed method are solid and thorough, and shows clear strengths to the design choices the authors made." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents MEDIC, a novel method for training-free editing of musical audio with freeform text using pre-trained text-to-audio diffusion models. The paper also presents ZoME-Bench, a new editing benchmark for musical audio editing." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Overall, while the proposed method is solidly novel and seems to perform better than current SOTA training-free editing approaches, issues in the overall clarity of the paper, evaluation suite, and in particular the proposed benchmark overweigh the contributions and thus I recommend rejection.\n\n\n\n# Overall Clarity\n---\nThe paper contains a number of grammatical errors, incorrect names of things, and incorrect citations.\n\n- The “Branch” term (line 070) is introduced without explanation.\n- \"rigid\" and \"non-rigid\" edits are introduced without explanation. These terms are not elucidated by figure 1, as there seems to be no difference between the input prompt and the \"non-rigid\" prompt.\n- (Line 143) What is “plug-in-plus”? Is this a typo for plug-and-play?\n- (Line 369) What is ZoMo-bench? Is this a typo from ZoME-Bench?\n- It should be “MedleyDB” not “MelodyDB” (line 370)\n- Appendix C seems to contain the same information duplicated twice (874-880 and 881-892)\n- MusicCaps has the incorrect citation on line (161), which should point to Agostinelli et al.\n\nWhile the above issues are minor, more importantly there is a distinct lack of clarity in the methodological contributions and what are contributions from the authors.\n- In particular, it feels like the paper suffers from overuse of acronym-like terms. Between Disentangled Inversion Control, Harmonized Attention Control, and Disentangled Inversion Technique, it is hard to tell what is a subset of what.\n- In section 3.3.1. even more acronym-like terms are introduced (Global Attention Refinement and Local Attention Blend), and it is unclear both A) why are these getting special names if they are from existing work where these terms are not used and B) if these are novel contributions, this is not made explicit.\n- In Equation 2, there is a lone $M_t$, and it is unclear which attention map this refers to\n- In equation 3/4, the threshold functions takes $w_{src/tgt}$ as argument but in equation 6 it is only a function of the mask and the threshold\n- Both 3.3.1. and 3.3.2 are realtively hard to follow without an intimate knowledge of past (image-domain) works. These sections as a whole could be made more clear by drawing specific examples to editing tasks.\n- 3.3.3 (and algorithm 2), it is a bit unclear whether “forward” refers to the forward diffusion process (i.e. data $\\rightarrow$ noise) or the “forward” pass of the model. I think it is the latter, and if so I think this nomenclature should be fixed to the “backward” step of the diffusion process to bring it in line with standard diffusion terminology.\n- Paragraph 2 continuously refers to some caption include “with noise”, but this does not exist anywhere\n- What is MEDIC? It is sometimes used as (what I can infer) to be the main method, but this is never stated and it is unclear what this refers to specifically.\n\n# Evaluation Suite\n---\n - A key hyperparameter in all these editing-through-inversion methods is the $T_{start}$ parameter, which should in theory determine overall edit strength. It is unclear how this hyperparameter is chosen for the proposed method (and admittedly it may be ignored with using all 200 steps), but it is unclear how it was chosen for the baseline comparisons (such as DDPM-friendly).\n- As DIC is a text-based editing method, it is unclear how the MusicDelta section of MedleyDB was used for the task. 
If the authors used the prompts generated by Manor and Michaeli, this should be explicitly stated. In general, it is unclear what sorts of edits this dataset even contains other than that the samples are longer.\n- LPAPS results seem odd when compared to Manor and Michaeli, as the values for table 2 are in theory on the same dataset, yet are all considerably lower (by about 10x) than the values reported in Manor and Michaeli. Similar inconsistencies hold for FAD, and in general it is unclear why the results for DDPM-friendly are different from the original paper (as supposedly this is the same data).\n- Standard error bars should be reported for the subjective listening results, as all the values (with the exception of SDEdit) are quite close, and it is thus unclear whether the differences are statistically significant. In particular, it is also not stated how many subjects they used for the listening test and how many samples of each editing task were tested, as statistical significance should be calculated on average scores for each stimulus aggregated across users to assess whether such results would extrapolate to additional results from the proposed method.\n- FAD is a rather odd metric to be using in this context with paired samples, both because A) it is an unpaired, distributional metric that ignores the structural relevance between pairs of samples and B) FAD has documented instability at small sample sizes when estimating the covariance matrix of each Gaussian (generally needing 2.5k-5k+ samples to estimate this reliably [1]), thus making results somewhat unstable given ZoME-Bench’s size of only 1100 samples. Other metrics such as CLAP Maximum Mean Discrepancy (MMD) could be better suited, but in general it would make more sense to compare the FAD of generated samples to some existing reference set (such as Song Describer), as FAD is really more a measure of audio realism than of similarity to reference signals (which the text should reflect).\n- The argument in the “Qualitative Results” section is reasonably heuristic. From visually inspecting the spectrograms, it is not clear that there is any structural relevance to the original source audio, and simply saying “the change of ___ can be seen in the Mel-spectrum” is insufficient to point to anything meaningful (though admittedly, the utility of spectrograms as a visualization here is not great). However, I think the overall success of the proposed method is somewhat overstated, as most of the examples provided in the **audio demo samples** do not actually perform the target edit and preserve the non-edited content at the same time, with most samples sacrificing one of these two facets (which the authors identify as the most important parts of the editing process).\n\n# Proposed Benchmark\n---\nMy biggest issue with the paper is the proposed benchmark dataset of ZoME-Bench, as it seems to contain a number of logical flaws that severely limit its utility as a public benchmark.\n- It is odd that, as the first “editing benchmark”, there is no actual ground truth information about the correctness of the edit itself. If it is only being assessed by CLAP score, this implicitly assumes that 1-2 word changes in the CLAP text prompt return meaningful changes in the input embedding in order to assess this change, which is assumed without support here. 
One could imagine here that in a truly suboptimal sense, a model could simply prioritize an edit that does absolutely nothing to the source audio but is able to change the CLAP embedding of the output, which would theoretically achieve perfect results on the benchmark. As the benchmark is a core contribution of the present work, the authors should either have ground truth target audio samples for each source audio (which would be easily doable for some of the instrument tasks if one had source-separated tracks), and/or at least followed the growing standard practice [2] in editing evaluation and use pretrained discriminative models to help assess the edit fidelity of more fine-grained tasks, which is fully doable in this case (such as using instrument tagging models for edits 0/1/2/8 or genre classifiers for edits 3/4).\n- Many of the tasks are similar to previous work (AUDIT / InstructME) in being source separation tasks (0/1/2/8), and thus a much more natural choice for this benchmark would include source separated tracks in order to actually assess these edits that have real ground truth answers. While it is still unclear where the text captions came from for the MedleyDB subset (i.e. if they came from the Manor and Michaeli), it is odd that the MedleyDB subset was not used for creating the benchmark, as it seems readily available and has separated tracks for the instrument-based tasks, thus giving a possible avenue for ground truth targets.\n- The paper in particular is missing a rigorous definition of what “rigid” and “non-rigid” mean in the context of text-based audio editing, and why they have deemed certain editing tasks in one category or another. For the rest of my point here, I assume rigid means “content-based” and non-rigid means “style-based,” as that is what the paper seems to imply and is inline with past image domain works. For example, it is unclear why “instrument change” is referred to as a non-rigid task, given that in theory the change of an instrument should preserve all harmonic and melodic content and only reflect timbral changes (as it would be if a guitar player was changed to a violin but played the same part). Unlike in image domain tasks (where edits can mostly be grouped into those than edit a particular masked / bounding box region of the source image vs. ones that make global edits), this notion of region-concept cooccurrence does not exist in audio and thus porting over the definitions of “rigid” and “non-rigid” is not applicable out of the box.\n- In general, I think a number of the tasks proposed do not make for an informative benchmark. Tasks 3/4/5/6/7 (genre/mood/rhythm/background/melody) are ill-defined, as these conceptual buckets do not disentangle content vs. stylistic changes in the audio, and seem to be rather divorced from how actual musicians talk about semantic changes in music. As examples:\n - For genre, if “blues” changes to “rocks” then what changes? 
Is this a reflection of content changing (such as chords being simplified and melodic lines using fewer blues scales) or of stylistic changes (micro-timing, ornamentation, guitar fx)?\n - If “fast” is changed to “slow”, should the entire content be slowed down (thus reflecting a content-based change that can only be seen as “stylistic” if realignment is assessed between the original and now slower edit) or is this just a measure of decreased density of perceptual onsets?\n - If a “relaxing” melody is changed to a “cheerful” one, does this reflect changes in the pitch values, the rhythmic interpretation, both, or neither?\n- While this is somewhat up to subjectivity, none of these semantic tasks seem non-rigid to me (as they all involve some amount of content preservation with stylistic changes). If these are meant to allow content-based changes, this should be explicitly stated, and in general, I’m hesitant to even phrase such hypothetical tasks as “edits” in the first place (as something has to stay the same for it to be considered an edit).\n\n\nBetween the issues with the non-rigid tasks, lack of ground truth for the rigid tasks, and over-reliance on CLAP similarity as a measure of edit accuracy, the overall use of this benchmark for standardizing and improving music editing is quite limited. To improve the paper, I think that either completely focusing on the MedleyDB subset and mostly dropping the proposed benchmark from the paper (as the methodological contributions stand on their own) or performing the significant work to improve the benchmark (and/or justifying why CLAP similarity can be used as a ground truth signal so heavily) would both be valid options, as the present version of the benchmark is my main concern.\n\n[1] Jayasumana, Sadeep et al. “Rethinking FID: Towards a Better Evaluation Metric for Image Generation.” 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2023): 9307-9315.\n\n[2] Basu, Samyadeep et al. “EditVal: Benchmarking Diffusion Based Text-Guided Image Editing Methods.” ArXiv abs/2310.02426 (2023): n. pag." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We introduce Disentangled Inversion Control to support fine-grained zero-shot music editing." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024medic,\ntitle={{MEDIC}: Zero-shot Music Editing with Disentangled Inversion Control},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3f8556SIEn},\nnote={under review}\n}" }, "abstract": { "value": "Text-guided diffusion models make a paradigm shift in audio generation, facilitating the adaptability of source audio to conform to specific textual prompts. Recent works introduce inversion techniques, like DDIM inversion, to zero-shot editing, exploiting pretrained diffusion models for audio modification. Nonetheless, our investigation exposes that DDIM inversion suffers from an accumulation of errors across each diffusion step, undermining its efficacy. Moreover, existing editing methods fail to achieve effective complex non-rigid music editing while maintaining essential content preservation and high editing fidelity. To counteract these issues, we introduce the Disentangled Inversion technique to disentangle the diffusion process into triple branches, rectifying the deviated path of the source branch caused by DDIM inversion. 
In addition, we propose the Harmonized Attention Control framework, which unifies the mutual self-attention control and cross-attention control with an intermediate Harmonic Branch to progressively achieve the desired harmonic and melodic information in the target music. Collectively, these innovations comprise the Disentangled Inversion Control (DIC) framework, enabling accurate music editing while safeguarding content integrity. To benchmark audio editing efficacy, we introduce ZoME-Bench, a comprehensive music editing benchmark hosting 1,100 samples spread across ten distinct editing categories. This facilitates both zero-shot and instruction-based music editing tasks. Our method achieves unparalleled performance in edit fidelity and essential content preservation, outperforming contemporary state-of-the-art inversion techniques. Audio samples are available at https://MEDIC-Zero.github.io. Both code and benchmark will be released." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Zero-shot Music Editing", "Inversion Techniques", "Attention Control" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/7b62071693776b0dc9b275febbe8f02e0794bae9.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "MEDIC: Zero-shot Music Editing with Disentangled Inversion Control" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3fGtV4Zfgq
Fast training and sampling of Restricted Boltzmann Machines
main
Active
Restricted Boltzmann Machine;Fast Sampling;structured data learning;training algorithm
generative models
3;3;3;5
4;5;3;4
2;2;2;3
2;3;2;3
2;3;2;3
3.5
4
2.25
2.5
2.5
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "*** introduction\n\nWhilst the RBM is well known, it would be helpful I feel for a reader to have the definition of the model earlier in the text. It currently isn't defined until near the end of page 4. Please introduce the RBM formally earlier in the text.\n\nNotation: inconsistent use of $N_v$ and $N_{\\text{v}}$ throughout, similarly for $N_h$.\n\nEquation 1: it might be better to write W_{i\\alpha}, rather than w_{i\\alpha} since w is used later for the \"singular values\".\n\n*** page 2\n\nFigure 1 isn't very easy to parse. For example the panel on race is placed more in the Mickey column than the human genome column.\n\n*** page 5\n\nPlease clarify the difference between \"model averages\" and \"observable averages\" and the difference between using N_s independent MCMC processes and R parallel chains.\n\nPlease clarify for the reader the meaning of <v_ih_a>_D\n\nSection 4: It is not correct that it is possible to train \"exactly\" an RBM with a reduced number of modes. Approximations are required, as explained in the supplementary material.\n\nPlease state what the free parameters to learn are in equation 3. If u and \\bar{u} are the singular directions, then the free parameters would be w_\\alpha? \n\nIn general I found the description of the low-rank approach unclear and this important section needs work to make it simpler and more clear to the reader.\n\nFor figure 14 it would be useful to show the distribution of the PCA projected data to see how well the RBM matches the projected data distribution.\n\nIt's unclear to me what contribution the authors are claiming to make. They state that the learning of the low rank parameterisation of W has been done before. Please clarify what the contributions of the paper are.\n\n\n*** Section 5\n\nI find it hard to follow why the authors are considering different sampling schemes and therefore what the aim of this section is. I presume this is considering alternative sampling approaches after the low-rank pre-training has been applied. However, I struggle to follow a clear recommendation or conclusion as to which method might be more suitable.\n\n*** Section 6\n\nIn the conclusion the authors claim to have introduced a method that enables \"precise computation of log-likelihood\". I cannot see anything in the main text that relates to this. There is no experiment I can see that measures the quality of the log-likelihood approximation. Please give some evidence to support this assertion.\n\n\n*** Supplementary material\n\nThe use of the term \"mode\" isn't very clear. The phrasing suggests that the first d modes of the maximum likelihood trained RBM should correspond to the d \"modes\" of the PCA solution. I'm not sure I know what this means. What are modes of a PCA solution?\n\nThe notation \\hat{u} is confused with \\bar{u}.\n\nWhy use $w$ here whereas $W$ is used in the main text?\n\nThe derivation is quite confusing. 
For example, the dependence on \bar{u} in equation 7 disappears without explanation. Indeed, \bar{u} seems never to be properly defined.\n\nPlease state clearly which parameters of the model are being learned.\n\nSection A.2: the claim of exact training is, as before, incorrectly made here.\n\nThe notation in equation 20 is confusing, such as w_{\alpha,a}=\sum_i w_{ia}u_{i\alpha} -- are Greek and Latin indices meant to indicate references to different entities, even though both objects are labelled w?\n\nIn general I find the supplementary material confusing. I believe it is trying to fit an RBM projected to the d-dimensional subspace defined by PCA of the data to the empirical data distribution in that same subspace. However, approximations are clearly required in order to compute the projected RBM distribution. Given that, for a very low dimension d one can easily discretise the model and carry out a simple maximum likelihood fit. If that is what is being done, it is not well explained and rather misleading (since this requires approximations itself).\n\nAn alternative (and standard) way to compute the marginal p(m) is to use the integral (Fourier) representation of the Dirac delta function. This means that the summation over v can then be carried out exactly, leaving only a d-dimensional integral to exactly compute p(m). This can also be carried out using discretisation for small d. The authors are (as far as I can understand) also using discretised integrals, so I'm unclear why they don't employ the standard Fourier delta representation approach to compute p(m) -- this would seem to involve fewer approximations than the approach the authors consider." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "RBMs are an important model and finding appropriate ways to train them is a topic of significant interest. The paper highlights the phenomenon of critical slowing down and how pre-training the model with a low-rank approximation of the parameter matrix can help the model overcome some of the slowing down effects." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper discusses approximations to train a restricted Boltzmann machine (RBM). The first is to pre-train the RBM by fitting a constrained (low-rank) form of the RBM to the low-dimensional PCA space of the data. This can help with finding a good initial solution. After this, various MCMC approaches are considered to continue training." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper suffers from a lack of clarity of presentation and a lack of clarity of novelty.\n\nThe paper mentions that the idea of a low-rank approach has already been used by others, and it's unclear to me what novelty there is in any of the sampling approaches used after the pre-training phase.\n\nIn terms of presentation, there are notational inconsistencies and a general lack of clarity in terms of the main ideas. Fundamentally, the approach of fitting a constrained model seems straightforward, and indeed I believe there is a simple way to compute the projected distribution in the PCA space (using the Fourier integral representation of the Dirac delta function), which the authors do not discuss." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "## Questions, Small comments and typos\n- Would it be possible for the authors to provide a sketch and pseudocode for their PTT algorithm as a standalone and in comparison to PT? This would be very helpful to get a better understanding of the contribution of this work. \n- Is there any intuition behind the bump observed in Figure 3 at around $10^3$ gradient updates (left and middle plot).\n- Layout: there's a problem with Figure 2. The x-axis is sometimes completely or partly cut. I strongly recommend carefully checking this, aligning the plots, and making sure such problems are removed. \n- In general the authors often refer to the Appendix as SI (I assume Supplement Information). I guess this acronym has not been defined anywhere. I identify its first occurrence in line 96. Perhaps the authors can define what SI is or, alternatively just all it appendix. \n- Line 235: I'd recommend adding a reference for critical slowing down. This comment applies to earlier occurrences of this concept.\n- Line 459: grew -> grey\n- Line 512: Banos et al. (2010) might need to be wrapped in parenthesis \\cite -> \\citep" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is well-written and easy to follow. \n- It represents a pleasant read that is accessible to a broad audience. \n- The literature review and related work section read well and are exhaustive.\n- The idea of pre-training the RBM to encode the principal components is simple yet very effective. \n- Leveraging the analogy between critical slowing down and the struggle of RBM during training to be ergodic and discovering all modes of the distributions is elegant and intuitive (though I suppose this is not a novelty of this paper, it is very nicely pictured in the introduction). \n- The numerical experiments look solid and aligned with the theoretical insights given in the main text. \n- I have not thoroughly checked the mathematical details in the appendix, but at first glance, they look good." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The claimed novelties of this work are twofold. \nFirst, this paper proposes low-ranking training of RBMs by directly encoding the principal components throughout a convex-optimization process. This pre-training component proves to be very efficient when data are particularly clustered. In such cases, target densities are highly multimodal, and the model struggles to \"discover\"all the modes from scratch during training without the pre-training phase. 
This autonomous discovery of new modes is often associated with second-order phase transitions, similar to systems from statistical mechanics, where critical slowing down prevents the discovery of all modes in finite time efficiently. \n\nAs a second contribution, the paper also investigates how to use a variation of parallel tempering (PT) algorithms, termed parallel trajectory tempering, to sample more efficiently and obtain log-likelihoods estimates. In simple terms, parallel trajectory tempering (PTT) essentially relies on the same idea of parallel tempering of swapping between models at different temperatures using the Metropolis rule (and therefore retaining detailed balance). However, differently from PT, PTT swaps a full set of parameters $\\Theta^t$ instead of the temperature $\\beta$ only. In that sense, it can be thought of as a generalization of PT. \n\nNumerical experiments in Fig. 2 prove the pre-trained low-rank RBM to be more capable of identifying all modes in highly clustered data, while Figs. 3-4 show that PTT allows more accurate loglikelihood estimation and faster yet more efficient sampling from all modes of distribution compared to standard alternate Gibbs sampling (AGS)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- I find it a bit challenging to identify the two main contributions in the paper as those are totally disentangled in their presentation between Sec. 4 and Sec. 5.2. I strongly recommend adding a list of bullet points at the end of section 1 to clearly list the contributions of work and crossref to the corresponding point in the paper. This would substantially help navigate the paper. \n- I find that the structure of sections 5.2 and 5.2.1 can be improved. In particular, I find it confusing that Parallel Trajectory Tempering is introduced in section 5.2, and Parallel Tempering approaches are discussed in section 5.2.1. I find this logically inefficient as I believe that a more natural yet easier-to-follow flow would be to first introduce Parallel Tempering approaches and then explain what makes PTT different compared to existing approaches from the literature. As this is one fundamental contribution of this work I believe it is crucial to rework these sections such that the actual novelty emerges more clearly from the discussion. \n- The discussion around eq. (4) is rather crucial for the paper as it represents one of the main contributions of this work. Currently, the novelty with respect to Decelle and Furtlehner (2021a) is not very clear to me, and I would appreciate it if the authors could elaborate more on this. Moreover, what's the intuition behind the \"magnetizations\" along each of the singular vectors? Is there any correspondence with the magnetization as a physical observable? As far as I understand, those should be the projections along the unitary vectors of the visible variable. Is that correct? If all my understanding is correct, then the new contribution of this work is to use a bias initialization along a direction $\\boldsymbol{u}_0$, which augments the dimensionality of the system by one in the bias direction. 
If all of the above is correct, I wonder the following:\n - How beneficial is it to have such an augmented direction for the bias compared to the naive approach proposed in Decelle and Furtlehner (2021a)?\n - Have the authors conducted any ablation studies to compare the differences in performance between Decelle and Furtlehner (2021a) and their new approach from an empirical standpoint?\n\nThis latter point is crucial in assessing the effective novelty of this work. At the moment, the lower score is primarily due to my perception of limited novelty. I am more than happy to discuss this with the authors during the rebuttal and revisit my score upon clarification of my concerns above (and below, see, e.g., the first bullet point in the **Questions** cell)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Does critical slowing down occur in the energy-based model when the hidden variables are traced out, or does it occur in the joint distribution that includes the hidden variables? If the phase transition occurs in the joint measure, does the traced-out distribution also exhibit a phase transition?\n- What is the definition of $\bar{u}$?\n- Could the authors provide a detailed derivation of Equation (4)? The terms $\bar{u}_{a}$ and $\eta_{a}$ are currently undefined.\n- The phrase \"a direction $\bar{u}_0$ is used for the magnetization $m_0$ that is only present in the bias term\" is unclear. Could you explain this in more detail?\n- Is it possible to learn a DBM without pre-training, using the pre-training with weights introduced by [1]?\n\n[1] Yuma Ichikawa and Koji Hukushima, Statistical-mechanical Study of Deep Boltzmann Machine Given Weight Parameters after Training by Singular Value Decomposition." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper offers a novel contribution by proposing a pre-training technique and a new sampling approach for RBMs inspired by their thermodynamic properties. This builds on the existing theoretical analyses of RBMs.\n- To my knowledge, extending replica Monte Carlo methods to a learning trajectory is original and intriguing.\n- Including a specialized physics background in the Appendix makes the paper accessible even to readers without a physics background." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This research proposes an efficient training approach for structured data in RBMs by employing pre-training based on simple convex optimization, which significantly facilitates learning for structured datasets. Furthermore, the study introduces a novel sampling and log-likelihood evaluation method that leverages the model's learning process, differing from conventional Parallel Tempering." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The distinction between theoretical claims and empirical findings is not clear. It would be beneficial for the authors to clarify which parts of the study are based on theoretical analysis and which are supported by numerical experiments, particularly in the context of related work. For instance, the first- and second-order phase transition claims pertain to equilibrium properties. However, it is unclear how these phase transitions are justified when updating parameters with limited samples. \n\n- In Section 4, the paper introduces pre-training for low-rank RBMs with singular value decomposition (SVD)--based weights, aiming to avoid continuous phase transitions (second-order transitions) as structural patterns gradually emerge. It is further claimed that training can proceed quickly using the PCD method after post-pre-training. Could the authors provide a more detailed explanation for this intuition? Even if second-order transitions are avoided, if there are multiple stable clustered states, capturing multiple modes with the PCD method may be challenging and could introduce bias in the estimation. However, the paper claims, \"Once the main directions are incorporated, training can efficiently continue with standard algorithms like PCD, as the mixing times of pre-trained machines tend to be much shorter than at the transitions.\" I believe that simulating clustered models with simple PCD often results in impractically long mixing times. Indeed, in Section 5.2, it is argued that mixing is very slow for AGS in clustered data.\n\n- The statement \"It’s also often ineffective with highly clustered data due to first-order phase transitions in EBMs, where modes disappear abruptly at certain temperatures, as discussed by Decelle & Furtlehner (2021a)\" suggests that using PT becomes challenging because the learned RBM exhibits a first-order transition at specific temperatures. However, does the existence of a first-order transition in the learned RBM typically occur regardless of the statistical model being learned? For example, if learning a model without a first-order transition, such as the Ising model without a local field, does a first-order transition still arise in the learned RBM? This seems somewhat nontrivial.\n\n- In the phase diagram of A. Decelle’s Thermodynamics of Restricted Boltzmann Machine and Related Learning Dynamics does not appear to be a first-order transition, and the AT line may suggests continuous phase transitions dominated by Full-step RSB. Thus, the claim regarding first-order transitions requires further elaboration. If a first-order transition is present, it would be essential to validate this by examining the free energy from the equilibrium state of the learned model, which could likely be accomplished by evaluating the partition function using the proposed method.\n- If a first-order transition does exist, then the exchange probability in PT would approach zero near the transition. Has this phenomenon been observed? 
Additionally, it would be helpful to evaluate the round-trip rate of PT and PTT.\n- While it is argued that preparing models at different temperatures is challenging for PT, it should be noted that the proposed approach also requires storing models during the learning process.\n- The CelebA data in Figure 2 appears to be truncated.\n\nBecause the high performance has been verified numerically, the score can be raised if the above statement is cleared." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Why didn't you apply this to larger problems?\n\n2. What are situations where the pre-training fails?\n\n3. Is PTT useful for generating samples during training, using only earlier parts of the training run?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The idea of low-rank pre-training is interesting and seems like it could be useful if it scaled up.\n\n2. The idea of doing AIS across the training run is creative and clever.\n\n3. Parallel tempering across training steps seems new." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies algorithms for the training of Restricted Boltzmann Machines (RBMs). It argues that \"highly structured\" data require different algorithms than those that have been successful for, e.g., image datasets. There are three algorithmic ideas that are discussed: 1) Pre-training an RBM using an \"exact\" procedure that produces low-rank weight matrices; 2) Estimating log-likelihoods using annealed importance sampling across steps of a training run; and 3) Using parallel tempering for sampling, again using different steps of training. Evidence for the efficacy of these procedures is provided in the form of curves from training runs on a few small datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. I think this paper has a somewhat limited audience. It mostly builds upon work from a small group of authors, using language most familiar to that community. (For example, one person's work is cited thirteen times in the references.) A significant amount of jargon is used that keeps this from being a readable stand-alone paper. This is coupled with heuristic explanations for things that appear to rely on sharing the particular statistical mechanical point of view of this subcommunity.\n\n2. Much of the motivation for the work centers on \"highly structured\" data, which is not defined clearly. The authors indicate that this corresponds to the existence of clusters. The paper does not show examples of the methods succeeding or failing in the presence of this structure. 
For example, the Celeb-A dataset is given as an example of a dataset in which there are not clusters and so it is not \"highly structured\". However, Figure 2 does not seem to show us that this matters for the pre-training procedure. Figure 15 is\nsimilar. Why does one conclude that the bottom row of Fig 2 and Fig 15 are significantly different from what we see in the top row of Fig 2?\n\n3. The main text is highly verbose, with most of the actual concrete content being in the appendices. I don't think anything novel is introduced until page six.\n\n4. I find it difficult to appreciate precisely what the contribution of Section 4 is. As I understand it, the insight is \"do Decelle & Furtlehner (2021a) before you do PCD\". This is useful information, but between this section and Appendix A, I'm not sure where the boundary is between this and D&F (2021a).\n\n5. While the ideas of section 5 are interesting and Figure 3 is intriguing, the empirical results are at the level of \"preliminary findings\" on a single small problem. Even with the vastly smaller compute resources of 15 years ago, RBM researchers were studying larger problems.\n\n6. The title is too broad relative to what the paper delivers.\n\nTypos:\n - L161-162: \"two slow\"\n - L478: \"exchanges parameters\" but I think you mean \"exchanges configuration\".\n - L775-776: \\bar{u} vs \\hat{u}.\n - L836-837: \"gradient is convex\" -- surely you mean the training objective is convex in the parameters." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We design a new training pipeline for the Restricted Boltzmann Machine, capable of training quickly and accurately on hard multimodal dataset." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024fast,\ntitle={Fast training and sampling of Restricted Boltzmann Machines},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3fGtV4Zfgq},\nnote={under review}\n}" }, "abstract": { "value": "Restricted Boltzmann Machines (RBMs) are effective tools for modeling complex systems and deriving insights from data. However, training these models with highly structured data presents significant challenges due to the slow mixing characteristics of Markov Chain Monte Carlo (MCMC) processes. In this study, we build upon recent theoretical advancements in RBM training, focusing on the gradual encoding of data patterns into singular vectors of the coupling matrix, to significantly reduce the computational cost of training (in very clustered datasets) and evaluating and sampling in RBMs in general. The learning process is analogous to thermodynamic continuous phase transitions observed in ferromagnetic models, where new modes in the probability measure emerge in a continuous manner. Such continuous transitions are associated with the critical slowdown effect, which adversely affects the accuracy of gradient estimates, particularly during the initial stages of training with clustered data. To mitigate this issue, we propose a pre-training phase that encodes the principal components into a low-rank RBM through a convex optimization process. This approach facilitates efficient static Monte Carlo sampling and accurate computation of the partition function. 
Furthermore, we exploit the continuous and smooth nature of the parameter annealing trajectory to achieve reliable and computationally efficient log-likelihood estimations, enabling online assessment during the training process, and to propose an novel sampling strategy termed parallel trajectory tempering that outperforms previously optimized MCMC methods.\nOur results demonstrate that this innovative training strategy enables RBMs to effectively address highly structured datasets that conventional methods struggle with. Additionally, we provide evidence that our log-likelihood estimation is more accurate than traditional, more computationally intensive approaches in controlled scenarios. Moreover, the parallel trajectory tempering algorithm significantly accelerates MCMC processes compared to existing and conventional methods." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Restricted Boltzmann Machine", "Fast Sampling", "structured data learning", "training algorithm" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/604a2a98709731bdad90a5691fcbddf35774680c.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Fast training and sampling of Restricted Boltzmann Machines" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3fGwTRRudc
FocalLens: Instruction Tuning Enables Zero-Shot Conditional Image Representations
main
Active
Conditional Image Representation;Instruction tuning;Contrastive Learning;Vision-Language Models
foundation or frontier models, including LLMs
3;5;5;6
4;3;3;4
3;3;2;3
2;2;2;3
2;3;3;3
4.75
3.5
2.75
2.25
2.75
-0.229416
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See the weakness section. I would like to increase my rating, if the proper justification of my questions will be given." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper focuses on the conditional visual representation of the images, through instruction tuning, which is a good motivation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a method called FocalLens, that is designed to improve the visual representation capability of the vision encoders through instruction tuning. The motivation of this method to focus on the specific part of the images, according to the conditions or instructions given. The authors have presented two variants of this method, (i) FocalLens-MLLM : builds upon LLaVA, and (ii) FocalLens-CLIP : builds upon CLIP encoders. The extensive experiments on retrieval tasks have shown superior performance over baselines." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The motivation of this paper for retrieval tasks through text instructions, is not a new concept. On the other side, the proposed architectures are similar to LLaVA, with just an addition of contrastive loss, that brings very minor novelty.\n\n2. It is not clear about which instructions are given during the inference of retrieval tasks? It would be better to provide those instructions. \n\n3. I don't see any difference between the textual conditioning of this work and composed image retrieval (CIR) task. As both are shown differently, what are the reasons behind that? A concrete explanation is preferable. \n\n I think Pic2Word [1] and SEARLE [2], which are focused on CIR tasks, should be one of the proper baselines for the retrieval experiments.\n\n4. LLaVA [3] should also be one of the baseline methods with the LLaVA feature variant of FocalLens.\n\n5. The overview of the apporach is easy to understand, but the overall presentation is not good as the paper lacks the clear explanation of technical details and experimental details.\n \n\n [1] Pic2Word: Mapping Pictures to Words for Zero-shot Composed Image Retrieval. Kuniaki Saito, Kihyuk Sohn, Xiang Zhang, Chun-Liang Li, Chen-Yu Lee, Kate Saenko, and Tomas Pfister. CVPR, 2023.\n\n [2] Zero-Shot Composed Image Retrieval with Textual Inversion. Alberto Baldrati, Lorenzo Agnolucci, Marco Bertini, and Alberto Del Bimbo. Zero-Shot Composed Image Retrieval with Textual Inversion. In ICCV, 2023.\n\n [3] Visual instruction tuning. Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Advances in neural information processing systems, 36, 2024a." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I am concerned about the questions mentioned above. I am leaning towards borderline accept and hope the authors could address my concerns during the rebuttal." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "-\tThe proposed method is well-motivated. The idea of using text instructions as conditions to extract features of interest for certain downstream tasks is intuitive and interesting. \n-\tThe paper is generally well-written and easy to follow.\n-\tThe experiments are extensive, covering a broad range of tasks including image-image retrieval, image classification, and image-text retrieval." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a conditional visual feature extraction method that focuses on the representation of specific aspects of the image described in the given text. Specifically, the authors leverage visual instruction tuning data to tune a pre-trained vision encoder in a contrastive manner, taking natural language instructions as additional inputs to produce conditional image representations. Experimental results on a wide range of downstream tasks demonstrate that the proposed method produces features of interest better than generic features produced by standard vision encoders like CLIP." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "-\tWhile most results seem promising, some of them are not. For example, in Table 2, FocalLens performs worse than InstructBLIP on GeneCIS (Attribute). In Table 3, FocalLens performs worse than CLIP on Flower and Aircraft. In Table 7, compared with OpenAI ViT-L-14, FocalLens performs the same on Orientation and significantly worse on Structure. These results make me concerned about the actual effectiveness of FocalLens on certain conditions. Could the authors provide a justification on this?\n-\tThe authors use the visual instruction tuning data in LLaVA to train FocalLens models. It would be better to show how the number of visual instruction tuning examples affect the final performance." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. What is the specific and concrete difference between the proposed method and the existing text-conditioning visual feature exaction method?\n2. What kind of broader tasks can be only solved by this proposed method? The contribution (compared with other similar or related works) needs to be highlighted during the rebuttal." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The proposed method is validated on multiple downstream tasks for image representations and achieves consistent performance improvements." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes the FocalLens which is a visual feature extraction method that generates different image representations based on the specified context of interest, guided by natural language. By turning a pre-trained vision encoder with vision instruction tuning data, FocalLens emphasizes features relevant to the context, and aims to enhance performance on tasks like image retrieval and classification." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The proposed model is unlikely to outperform existing models significantly. The authors mentioned in Lines 139-141 that the proposed model is different from other conditioned vision models (e.g., LLaVA [1], also cited in the paper) because the proposed model can be applied in \"broad downstream use cases\". However, in the training setting, they use similar training data and settings as LLaVA. This is thus no validation for \"being able to do broader downstream tasks\".\n\n2. This paper misses the baseline that uses LLaVA features. From the reviewer's understanding, the proposed model looks like a submodule of LLaVA (by removing the language branch). That is, LLaVA is equal to the proposed method if including a text decoder. Currently, the advantage of this work compared with the LLaVA encoding features is unclear.\n\n3. The motivation of this paper is not convincing to the reviewer. In the related works (and some parts of the introduction section), the justification of the difference between existing works and this submission is not clear. The reviewer's understanding is that general MLLM aims to learn general representation, and existing conditional visual representation works aim for task-specific conditioning features. While, this submission aims to learn general conditioning features which might be somewhere between general features and task-specific conditioning features. Then, the question is what is the criterion to distinguish these three features? It is quite confusing what is going to be learned given that related works of conditioning features have been obtained in many existing works. In other words, what is the benefits of learning such in-between conditioning features/representations? 
It seems general but also specific to some conditions which are not clarified in this paper, given the validate datasets are just common ones.\n\n[1] Visual Instruction Tuning" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Does the training of FocalLens-MLLM still have the next-token-prediction loss based on cross-entropy?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Extracting visual features according to specific conditions, e.g., text instructions, is worth studying.\n2. Overall, this paper is well-written and easy-to-follow.\n3. FocalLens-CLIP appears to be effective.\n4. The motivation is clear, and the training pipeline of FocalLens-CLIP is reasonable." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces FocalLens which is able to produce different visual representations of the same image based on different contexts of interest. The authors leverage the instruction tuning data, which is usually in the triplet format of (image, instruction, output), to fine-tune MLLMs and CLIPs in a contrastive manner, resulting in FocalLens-MLLM and FocalLens-CLIP, respectively. Evaluations on various benchmarks demonstrate the effectiveness of the propose method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. FocalLens-MLLM is somewhat weird. This paper aims to produce context-aware visual features. However, there appears to be a discrepancy in the design, as the visual features produced by FocalLens-MLLM do not seem to be modulated by contextual instructions. Notably, the architecture does not incorporate instructions as inputs to the visual encoder. Consequently, this suggests that the visual features extracted remain invariant across different instructions. Could you explain in more detail how the instruction information modulates the visual features in FocalLens-MLLM? Is there a mechanism that allows the visual encoder to produce different representations based on different instructions?\n2. Equipping FocalLens-CLIP with standard MLLM training recipes seems to be an appropriate design. I am curious about the performance. Have you evaluated FocalLens-CLIP's performance on standard multimodal comprehension tasks compared to baseline models? Could you provide results or analysis demonstrating how the context-aware visual features contribute to improved multimodal understanding?" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We leverage contrastive instruction tuning to train text-conditioned vision encoders that produce representations aligned with specific conditions of interest in a zero-shot manner." 
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024focallens,\ntitle={FocalLens: Instruction Tuning Enables Zero-Shot Conditional Image Representations},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3fGwTRRudc},\nnote={under review}\n}" }, "abstract": { "value": "Visual feature extraction is fundamental to many vision tasks. Most existing methods extract visual features by encoding an image into a generic feature vector. However, an image naturally contains rich information, and there may be multiple perspectives to describe it. For each application, we might be interested in different aspects of an image and want to prioritize those features over others. For instance, in an image of a dog carrying a toy, if we are primarily interested in the dog, we would expect the extracted features to emphasize the dog over the toy. In this work, we introduce FocalLens, a conditional visual feature extraction method that produces different representations for the same image based on the context of interest, expressed flexibly through natural language. We leverage vision instruction tuning data and contrastively tune a pretrained vision encoder to take natural language instructions as additional inputs and produce conditional image representations. Extensive experiments validate that conditional image representation from FocalLens better pronounce the visual features of interest compared to generic features produced by standard vision encoders like CLIP. In addition, we show FocalLens further leads to performance improvements on a range of downstream tasks including image-image retrieval, image classification, and image-text retrieval, with an average gain of 5 and 10 points on the challenging SugarCrepe and MMVP-VLM benchmarks, respectively." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Conditional Image Representation", "Instruction tuning", "Contrastive Learning", "Vision-Language Models" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/b759b71bf08f87a4eaefaf909c27108ece025510.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "FocalLens: Instruction Tuning Enables Zero-Shot Conditional Image Representations" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3fl1SENSYO
Unleashing the Potential of Diffusion Models for Incomplete Data Imputation
main
Active
Diffusion models;missing data imputation
generative models
5;5;8;8
4;4;3;3
2;2;3;3
2;3;3;3
3;3;3;3
6.5
3.5
2.5
2.75
3
-1
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "In biology, missing values are often represented as 0 (or another \"limit of detection\" (LOD) value), making it difficult to distinguish between actual LOD values and data missing at random (which can comprise 30% of data in cases like proteomics and single-cell analysis). Do you have any ideas about how this problem could be addressed? Note that the fraction of missing values might be known and could potentially be conditioned on." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* The paper is well-written with a clear, thorough, and concise introduction that effectively summarizes key points from previous works\n* The authors specifically address the challenges of the problem and provide clever solution to mitigate them\n* The paper's main novelty is supported by theoretical proof\n* The evaluations are comprehensive, with thorough and convincing ablation studies" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper addresses the imputation of data missing completely at random (MCAR) in tabular data, handling both continuous and categorical variables. The authors propose an EM procedure with a conditional diffusion model for imputation, featuring a novel adaptation of the annealing process for better conditioning on observed values. The paper demonstrates strong results across multiple datasets in comparison with leading methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* A major concern regarding evaluations - while the paper claims to use a single hyperparameter setting throughout, it's unclear how hyperparameters for other methods were selected and their sensitivity to these HP. For me, this concern significantly impacts the overall assessment of the paper.\n\n* While the results are impressive, their importance is not clear. A more convincing evaluation would include the effect on downstream tasks, given imputation is only a first step in most pipelines. \n\n* Another point regarding evaluations, is the sole focus on data missing completely at random. While the MER assumption is important, it is the MNER which is a primary focus in many imputation methods.\n\n* The 0/1 continuous encoding of categorical data is unusual, given that binary data is a known challenge for diffusion models (for example in fields like graph generation). 
Also, the use of mean is inherently problematic due to common multi-modality in the data\n\n* The novelty of the method compared to other approaches is not clearly articulated in the related works section\n\n### Smaller Issues\n* Given the method's novelty isn't specific to tabular data, the related work should include other imputation methods (e.g., image inpainting)\n* A simulation study with multiple modes would be valuable, particularly as diffusion-based models should excel in such scenarios\n* Despite highlighting the importance of initialization in the EM procedure, the paper doesn't address this point. (Particularly relevant given the naive initial imputation approach)\n* It would be interesting to analyze the relationship between delta_t size and ML solution approximation.\n* Figure 4 lacks clarity" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "n/a" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "This paper is well-written and easy to understand. This work combines EM with the diffusion model to improve the potential inconsistency caused by missing values in the training process of diffusion models. Since EM is used with K iterations and N number of samples in the E-step, it would be beneficial to also compare the computational complexity (time complexity) of the proposed method with other diffusion-type methods, either in the discussion of the number of operations or comparing the running time in some of the numerical experiments. This would offer more insights into the proposed methods' performance from different perspectives." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Theoretical analysis: DIFFPUTER’s training step corresponds to the maximum likelihood estimation of data density (M-step), and its sampling step represents the Expected A Posteriori estimation of missing values (E-step). \n2. Extensive experiments that demonstrate the good performance, as compared with existing baselines, of the proposed method across various datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The missing value imputation is a very important problem in both machine learning and statistics. Although deep generative imputation methods have shown promise, there is still substantial room for improvement, even for matching the performance of some traditional machine learning approaches. This paper introduces DIFFPUTER, a tailored diffusion model combined with the Expectation-Maximization (EM) algorithm for missing data imputation, and shows its promising performance on a variety of datasets," }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "the computational complexity is not explicitely discussed or compared on the numerical experiments, see details below." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "### Motivation appears to overlook recent work\n\n- The two main issues presented as motivation for this work are unclear. The paper claims that generative imputation methods (i) require joint estimation of observed and missing data distributions and (ii) struggle with conditional inference. I find both statements questionable. Numerous studies adapt deep generative models to estimate only the observed data distribution [1-5], which could serve in the M-step of an EM algorithm. Some of these are even referenced in this paper. Moreover, all of these methods allow for straightforward Monte Carlo estimation of $\\mathbb{E}[p(\\mathbf{x}_m | \\mathbf{x}_o)]$ for the E-step, similar to the proposed diffusion-based model. For instance, a more robust importance-weighted estimator is proposed in [4] (see Eq. (12)).\n\n- This brings me to a second point: if multiple DGMs could, indeed, replace diffusion models within the EM framework, how is diffusion specifically justified for tabular data? This approach might be advantageous for high-dimensional data, where diffusion models effectively approximate $p(\\mathbf{x})$ and avoid lossy compression (as in VAEs). However, given the lower-dimensional datasets studied here, it remains unclear why a VAE-based approach, for example, wouldn’t perform as well as diffusion.\n\n### Experimental section lacks fair comparison and clarity\n\n- Based on my earlier points, I would expect an ablation study comparing different DGMs within the EM algorithm. The baselines in the current experiments appear to rely on simple placeholder values for missing data (e.g., zero or mean imputation), effectively completing only one M-step. This is likely to produce suboptimal results, so a performance gap seems unsurprising.\n\n- The assertion \"Traditional machine learning methods are still powerful imputers\" would benefit from supporting references. I am skeptical, as optimal validation could be harder to achieve in probabilistic settings.\n\n- The claim that generative methods excel on continuous data requires clarification. Here, the diffusion model seems to assume Gaussianity across all dimensions, using $argmax$ as a proxy to obtain discrete outputs, which is not the optimal to model heterogeneous data [2, 3, 5]. \n\n- The statement \"imputation methods are specifically designed for in-sample imputation and cannot be applied to the out-of-sample setting\" also needs elaboration. As mentioned, many DGMs designed for missing data can perform out-of-sample imputation.\n\n- In Figure 2, MissDiff appears to fail or encounter out-of-memory issues. This is surprising, as MissDiff’s architecture is similar to the diffusion network used here.\n\n### Discussion of limitations is lacking\n\n- The main text does not discuss limitations, particularly the high computational cost. 
A brief note is found in the Appendix, but this isn’t referenced within the primary text. DiffPuter’s approach requires retraining the diffusion model $k$ times, so application to high-dimensional data (e.g., images) would be computationally intense relative to alternatives. I am also curious if the M-step converges faster with higher values of $k$, as this could enhance efficiency.\n\n### Other minor questions\n\n- Figure 6: Why does error decrease as observed data ratio drops? I found the final paragraph of Section 5 somewhat unclear; further clarification here would be helpful.\n\n### Typos\n- Line 515: Change *\", Reducing\"* to *\", reducing\"*.\n\n\n### References\n\n[1] Ma, Chao, et al. \"EDDI: Efficient Dynamic Discovery of High-Value Information with Partial VAE.\" International Conference on Machine Learning. PMLR, 2019.\n\n[2] Ma, Chao, et al. \"VAEM: a deep generative model for heterogeneous mixed type data.\" Advances in Neural Information Processing Systems 33 (2020): 11237-11247.\n\n[3] Peis, Ignacio, Chao Ma, and José Miguel Hernández-Lobato. \"Missing data imputation and acquisition with deep hierarchical models and hamiltonian monte carlo.\" Advances in Neural Information Processing Systems 35 (2022): 35839-35851.\n\n[4] Mattei, Pierre-Alexandre, and Jes Frellsen. \"MIWAE: Deep generative modelling and imputation of incomplete data sets.\" International conference on machine learning. PMLR, 2019.\n\n[5] Nazabal, Alfredo, et al. \"Handling incomplete heterogeneous data using VAEs.\" Pattern Recognition 107 (2020): 107501." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Robust imputation method based on EM. \n- Well written and structured. \n- The method is theoretically grounded. \n- The empirical analysis is extensive." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces an adaptation of the EM algorithm for missing data imputation, leveraging advanced diffusion-based models to perform precise density estimation in the M-step and provide robust imputations in the E-step, inspired by the RePaint algorithm. The authors reference theoretical analyses from prior work to support the use of diffusion models for density estimation and prove a theorem demonstrating that E-step samples can be drawn from the true conditional distribution. Extensive empirical evaluations highlight the proposed method’s robustness and superiority over various baseline approaches, many of which do not incorporate the EM algorithm." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Motivation appears to overlook recent work.\n- Experimental section lacks fair comparison and clarity.\n- Discussion of limitations is lacking.\n- Given these weaknesses, the contribution is not strongly justified." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- l.197: could you specify the choice of $\\sigma(t)$? \n- l.225-227: the paragraph does not correspond to the equation: the negative log likelihood is upper bounded by the loss plus a constant, which does not imply that optimizing the first leads to optimizing the second.\n- Section 5.1, how does the method behave when different masks are present in the training and test set? Does it degrade the performances? \n- Section 5.1, how were the hyperparameter chosen for the different baselines? Are these baselines comparable (in terms of number of parameters for example) with the proposed method? Could you add such a discussion in the Appendix? Could you also describe in details the missing data mechanisms used for the different settings (MAR and MNAR encapsulate a lot of data generating processes)?\n- l.366-367, Can you explain the good performances of the proposed method compared to MissDiff and TabCSDI?\n\n\nl.399 : \"Imputaing\"\nl. 468 : \"Gestrue\"" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The authors propose a new method to impute missing data for continuous and discrete inputs. The proposed method appears to be new, with excellent performances. An extensive literature review has been done to present and explain the previous approaches to deal with missing values via imputation. The method is clearly explained, the paper well-written, and the experiments show the benefit of the proposed method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors combine diffusion processes and Expectation-Maximization (EM) algorithms to propose a novel way to impute missing data when both training and tests sets contain missing data. The proposed solution is shown to target the correct conditional distribution (distribution of missing data conditional on observed ones). Imputation values are computed by taken the expectation with respect to this distribution, which is approximated by the sample mean. Experiments on ten real-world data sets show the benefit of the proposed method, compared to various state-of-the-art imputation algorithms (machine and deep learning algorithms)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I only have three remarks: \n- RMSE and MAE are measures that encourage the imputation to target the (conditional) mean or median. In both cases, the target is not a distribution but a single quantity. Recent works (https://arxiv.org/abs/2002.03860) have shown that such measures do not properly evaluate the correctness of an imputation method. Imputation score (https://arxiv.org/pdf/2106.03742) can be used instead to assess the quality of imputations. As the proposed method generates a distribution and not a single point estimate, it is likely that its performance will be higher with respect to this metric, showing that it is able to recover the underlying distribution of the data. Presenting imputation scores in the tables would definitely improve the strength of the paper, in my opinion.\n- The computational performances of DiffPuter should be discussed in the main text. 
Table 4 is interesting, as it shows that the training time is larger, but not too important. However, the two considered data sets have few features. It would be appealing to consider larger data sets with (i) more observations and/or (ii) more variables to see how the predictive performances and the training time behave. \n- I have trouble understanding the proof of Theorem 1. Notations are confused to me. Adding a table of notations, with exact definitions at the beginning of the Appendix would help. Besides, many approximations are done in the proof : l.730, 731, 750, 753. This results in the theorem being imprecise. For example, nothing is assumed about the quality of the neural network $\\varepsilon_{\\theta}$. What type of convergence is required for Theorem 1 to be valid? Similarly, in Theorem 1, $\\sigma(T)$ is not assumed to be large, whereas it is required in the proof. Please clarify the different assumptions and the proof." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "This paper combines EM algorithm and a Diffusion model for missing data imputation" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024unleashing,\ntitle={Unleashing the Potential of Diffusion Models for Incomplete Data Imputation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3fl1SENSYO},\nnote={under review}\n}" }, "abstract": { "value": "Generative models play an important role in missing data imputation in that they aim to learn the joint distribution of full data. However, applying advanced deep generative models (such as Diffusion models) to missing data imputation is challenging due to 1) the inherent incompleteness of the training data and 2) the difficulty in performing conditional inference from unconditional generative models. To deal with these challenges, this paper introduces DiffPuter, a tailored diffusion model combined with the Expectation-Maximization (EM) algorithm for missing data imputation. DiffPuter iteratively trains a diffusion model to learn the joint distribution of missing and observed data and performs an accurate conditional sampling to update the missing values using a tailored reversed sampling strategy. Our theoretical analysis shows that DiffPuter's training step corresponds to the maximum likelihood estimation of data density (M-step), and its sampling step represents the Expected A Posteriori estimation of missing values (E-step). Extensive experiments across ten diverse datasets and comparisons with 17 different imputation methods demonstrate DiffPuter's superior performance. Notably, DiffPuter achieves an average improvement of 8.10\\% in MAE and 5.64\\% in RMSE compared to the most competitive existing method." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Diffusion models", "missing data imputation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/064ab24549398f6caaea3170c576327f638889bd.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Unleashing the Potential of Diffusion Models for Incomplete Data Imputation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3flhuT2QGB
Towards Synergistic, Generalized, and Efficient Dual-System for Robotic Manipulation
main
Active
Robotic Manipulation;Vision-Language-Action Models
applications to robotics, autonomy, planning
5;5;6;6
4;4;3;4
3;3;3;3
3;3;2;3
2;3;3;3
5.5
3.75
3
2.75
2.75
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "### Questions\n1. From the current experiments, this method does not seem to solve problems beyond what Robot-Moo or RoboPoint achieve. Any thoughts on how this method could outperform approaches based on explicit representations like bounding boxes or points?\n2. It would be easier to understand if the conditioning feature were illustrated in Figure 2.\n3. The authors claim that OpenVLA and Octo serve as generalist models, but they do not generalize effectively in all cases. For instance, the OpenVLA paper mentions challenges with out-of-distribution (OOD) cases, reflective surfaces, unseen action spaces, and actions along the depth axis. Given this, OpenVLA may not be an ideal generalist. Does this imply that RoboDual’s generalization is limited by OpenVLA’s capabilities?\n4. In Figure 3, OpenVLA outperforms RoboDual in the \"Knock down object\" task. Can the authors explain why this is the case?\n5. Additional real-world experiments on generalization ability would be beneficial. Including a baseline setting where Diffusion Policy/OpenVLA performs reasonably well would also help clarify RoboDual's improvements, given the claim that OpenVLA mainly provides generalization ability in this setup.\n6. Perhaps I missed it, but is the Diffusion Policy baselines in your experiments the modified versions (specialist only), or the originals? This distinction is important, as a significant improvement from switching the transformer backbone to DiT may impact the novelty.\n7. Is the OpenVLA in your experiments the same model used as the generalist in RoboDual (generalist only)?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- **Effective Synergy**: The RoboDual framework innovatively combines a generalist and specialist model, leveraging OpenVLA’s generalization and Diffusion Policy's efficiency. This approach addresses a gap in imitation learning by merging generalist adaptability with specialist precision.\n- **Experimental Results**: Both simulation and real-world experiments show significant performance gains over state-of-the-art baselines in generalization and task-specific adaptation, highlighting RoboDual's potential in practical settings.\n- **Well-Written and Accessible**: The paper is clear, well-organized, and easy to understand, making the novel approach and its implications accessible.\n- **Open-Source Commitment**: The authors promise to release the code, which could foster further research and replication." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates a pertinent question in imitation learning: how to combine the generalization capability of models like OpenVLA with the accuracy and task-specific precision of methods such as ACT or Diffusion Policy. To address this, the authors propose a novel framework, RoboDual, which uses the intermediate tokens in OpenVLA to condition a modified Diffusion Policy. Through simulation and real-world experiments, the proposed method demonstrates substantial performance improvements over both generalist and specialist baselines." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Limited Real-World Experiments on Generalization** Although the framework shows promise, its real-world experiments, particularly those evaluating generalization capabilities, remain limited. Conducting additional experiments—such as testing with a wider variety of novel objects (beyond just banana to eggplant), introducing more distractors at varied locations, using diverse language instruction templates, varying lighting conditions, or providing more detailed descriptions of the existing experiments—would significantly bolster the case for RoboDual’s practical applicability.\n\n2. **Insufficient Introduction to the CALVIN Dataset** Including illustrative images in Section 4.2 to showcase the training and test settings of the CALVIN dataset would enhance readers' understanding of the experiment RoboDual run in simulation.\n\n3. **Improved Color Differentiation in Bar Charts** The colors representing Octo, OpenVLA, and Ours (single/multi) in the bar figures are difficult to distinguish. Selecting more visually distinct colors would improve clarity and make comparisons easier.\n\n4. **Failure Analysis** It is hard to tell which part is the bottleneck for the current method. A failure analysis will be helpful." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- In Table 3, it is interesting that transferring to an unseen background (checkered to solid-white tablecloth) results in 30% or greater drop in performance for all models. Do you have a hypothesis for why this is the case? One would expect the generalist models and RoboDual to be more robust to background texture. \n- What was the reason for choosing joint space control over end-effector control?\n- In Section 3.3, it says that the specialist model is trained for \"one hour\" but in Section 4.4 it says \"one hour of training on a node equipped with eight A100 GPUs\". Is this the same \"one hour\"? If so, updating Section 3.3 to \"8 gpu-hours\" would be more accurate.\n- It seems that fine-tuning the generalist policy to predict discretized actions in a specific robot's action space makes it no longer \"generalist\". 
Have you thought of other ways to condition the low-level policy that might allow one to deploy the system on different robot types?\n\nTypos: Figure 6a legend: (w/o Action L**e**tent)." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper proposes a scheme for combining a generalist VLA model with a specialist low-level policy model, via conditioning on latent representations and discretized actions. This approach relies on the low-level policy to process non-vision-language inputs (like depth, proprioceptive, or tactile info), so the VLA does not need to be fine-tuned extensively.\n- RoboDual can be trained much faster than a fully generalist approach, since the action predictions of an under-trained VLA model are refined by the specialist model that trains quickly.\n- The experiments in Section 4.4 show that RoboDual is very sample efficient, achieving 73.3% success rate at 5 demos on real-world tasks. This indicates that the coarse action predictions of the generalist are helpful and enable the specialist to refine the actions with limited data.-\n- RoboDual can be run at 15Hz at inference, compared to 4Hz for openVLA. This difference is significant, since jumpy movements of the robot prevent it from solving tasks that require precision." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a new method for solving language-conditioned, robotic manipulation tasks. The proposed method, RoboDual, combines an vision-language-action (VLA) model for high-level task understanding and long-horizon reasoning with a low-level policy to handle spatial reasoning and precise movements. The two models are integrated together by passing discretized action predictions and latent representations from the generalist model to the specialist model. A key benefit of their approach is the ability to run at higher control frequencies (20Hz), which is necessary for many dynamic manipulation skills, since they do not rely on the generalist to make predictions at every time step. RoboDual is evaluated on a suite of challenging manipulation tasks in simulation and the real-world, where it outperforms all baselines in terms of success rate and shows strong robustness to across task variations. RoboDual also outperforms baselines even in settings where the amount of data is significantly reduced." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The ablation study needs some work. Please switch to a different color map so that it is easier to distinguish the bars and read the legend. The axis range makes it look like the ablations have a substantial impact on model performance, even though the difference in performance is minimal. There should be error bars, otherwise it is difficult to determine whether a 0.03 increase in average time is significant. The discussion of these results should also be changed to better reflect the actual results. For instance, it is not true that \"each conditioning source from the generalist model plays an **essential** role in ... enhancing overall performance\" [emphasis mine] if removing the conditioning decreases average length by at most 4%. \n- There are some instances where the wording could be improved. In Figure 1 caption: \"the fast specialist policy *obsesses* ...\" (achieves?). 
Top of page 2, \"The *yielding* policy\" (The resulting policy). Bottom of page 2, \"We bring in a novel approach\" (We introduce? a novel approach). Beginning of Section 3.3, \"Disparate from\" (Unlike?).\n- The first contribution says, \"Our methodology ... paves the way for the practical application of VLA models\". This is quite a broad claim. I believe you are hinting at the computational efficiency of the dual-system. Perhaps modify this to say \"practical application of VLA models to higher-frequency control tasks\"." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. How does the performance vary with different sensory input combinations, and could simpler setups still achieve competitive results while offering advantages in runtime efficiency?\n\n2. How well does RoboDual perform in more dynamic or even user-interactive settings (e.g. moving an object while a trajectory is being executed)?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Combining a high-level generalist and a low-level specialist model is a compelling paradigm to enable broader generalization while maintaining more fine-grained control.\n\n2. RoboDual achieves higher performance with fewer demonstrations and limited computational requirements." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces RoboDual, a dual-system framework combining generalist and specialist policies for robotic manipulation. RoboDual leverages both i) a generalist’s broad generalization capabilities with ii) a specialist’s task-specific precision and efficiency. The generalist is a large-scale pretrained vision-language-action model which provides high-level guidance, while the specialist is a diffusion policy which facilitates rapid, precise control." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. A bi-level policy increases model complexity and therefore inference time, which may affect performance in low-latency tasks." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- **Line 26**: Could you clarify why \"with\" is italicized?\n\n- **Line 268**: The description of the shifted-window conditioning mechanism is somewhat unclear. Why are only $k_g - \\tau_s$ generalist actions sampled as conditioning rather than using the entire chunk of $k_g$ actions?\n\n- **Line 197**: There appears to be a duplicated closing parenthesis in \")), \\etc\". Could you confirm if this is an error?\n\n- In the experiment described in Figure 5, is the generalist model in the dual approach frozen? If it is frozen, are the weights solely from OpenVLA, or has it been further fine-tuned on CALVIN?\n\n- Is the VLA model strictly necessary as the generalist model? If a vision-language model (VLM) were used to extract conditions instead, would this achieve comparable performance to RoboDual?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The figures in the paper are thoughtfully designed and highly informative, significantly aiding readers in understanding the proposed methods and results.\n\n- The dual-system approach aligns well with principles from cognitive science, making its application to embodied manipulation both insightful and innovative. Implementing this concept in robotics is a valuable contribution to the field.\n\n- The extensive experimental results provide strong evidence of the model's advantages in achieving generalizable real-world manipulation. *RoboDual* outperforms both generalist and specialist-only models, demonstrating superior training and data efficiency, which highlights its practical value for broader real-world applications." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "To enable efficient fine-tuning and inference of VLA models, while not compromising generalisability, the article presents RoboDual, a dual-system approach that combines generalist and specialist policies to enhance robotic performance in dynamic environments. The generalist offers adaptability, while the specialist ensures efficient task execution. This synergy results in significant improvements in both real-world and benchmark tasks, offering strong performance with minimal data and higher control efficiency." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- When considering the system from another perspective: viewing the generalist as a vision-language model (VLM) and the specialist model as an action head, RoboDual can be seen as an asynchronous variant of Octo (Ghosh et al., 2024). This diminishes the novelty of the proposed approach, as the heterogeneity between the two systems mainly lies in the data and model scale, which impacts generalizability. It raises the question of whether scaling up the training data for the specialist model might yield comparable performance to the RoboDual system in terms of both computational efficiency and generalizability. 
Given that DiT is a scalable architecture, and considering the limited dataset used for experiments in Figure 5 (CALVIN), it would be valuable to explore this.\n\n\n One way to better distinguish these two systems is to draw more deeply on cognitive science concepts, such as viewing one system as responsible for reasoning (akin to System 1) and the other for actuation (akin to System 2). For example, in a task like \"write down the answer of 1 + 1 on the blackboard,\" the reasoning required to determine the answer is challenging for the specialist system alone. Highlighting such distinct roles could provide a more fundamental differentiation between the two systems.\n\n> Dibya Ghosh, Homer Walke, Karl Pertsch, Kevin Black, Oier Mees, Sudeep Dasari, Joey Hejna, Tobias Kreiman, Charles Xu, et al. Octo: An open-source generalist robot policy. arXiv preprint arXiv:2405.12213, 2024.\n\n- The experiments on training efficiency could be improved. The use of DistillBERT might not be sufficient for capturing the semantics of actions and target objects from language instructions. I would suggest adding two additional baselines:\n\n 1. Using T5 to encode language for the specialist-only model to ensure sufficient semantic extraction. Since language encoding is performed once per rollout, T5-xxl might be a suitable choice.\n\n 2. GPU hours may not be the best metric for measuring efficiency, as it does not account for the number of parameters. I recommend switching the x-axis metric to FLOPs for a more accurate representation of computational efficiency." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a synergistic dual-system framework that leverages the strengths of both generalist and specialist policy and paves the path to the practical deployment of VLA models." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024towards,\ntitle={Towards Synergistic, Generalized, and Efficient Dual-System for Robotic Manipulation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3flhuT2QGB},\nnote={under review}\n}" }, "abstract": { "value": "The increasing demand for versatile robotic systems to operate in diverse and dynamic environments has emphasized the importance of a generalist policy, which leverages a large cross-embodiment data corpus to facilitate broad adaptability and high-level reasoning. However, the generalist would struggle with inefficient inference and cost-expensive training. The specialist policy, instead, is curated for specific domain data and excels at task-level precision with efficiency. Yet, it lacks the generalization capacity for a wide range of applications. Inspired by these observations, we introduce RoboDual, a synergistic dual-system that supplements the merits of both generalist and specialist policy. A diffusion transformer-based specialist is devised for multi-step action rollouts, exquisitely conditioned on the high-level task understanding and discretized action output of a vision-language-action (VLA) based generalist. Compared to OpenVLA, RoboDual achieves 26.7% improvement in real-world setting and 12% gain on CALVIN by introducing a specialist policy with merely 20M trainable parameters. It maintains strong performance with 5% of demonstration data only, and enables a 3.8$\\times$ higher control frequency in real-world deployment. Code would be made publicly available." 
}, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Robotic Manipulation", "Vision-Language-Action Models" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/971c7b119fc885fa8f90545e7c938f9058a0564d.pdf" }, "presentation": null, "primary_area": { "value": "applications to robotics, autonomy, planning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Towards Synergistic, Generalized, and Efficient Dual-System for Robotic Manipulation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3fuPS85ekI
Adapting Communicating MLLMs on the Fly in Referring Expression Tasks
main
Active
Multimodal Large Language Models;Online Adaptation;Referring Expressions
reinforcement learning
5;5;5;6
3;5;3;2
2;2;2;2
2;3;2;2
2;2;3;2
5.25
3.25
2
2.25
2.25
-0.662266
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. The ZSL baseline seems insufficient. What about comparison with few-shot learning? eg. Given the trajectory of the past prediction results as context, would the speaker learn to better describe the object? \n2. What are the costs of introducing RL learning? Would be useful to add an analysis.\n3. It would be interesting to test the speaker's final understanding of the , eg, would the speaker be able to identify that the listener is color-blind in the end?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper proposes an interesting setting of the communication between MLLMs. The paper proposes to have one speaker and one listener, where the speaker need to address the best way to communicate with the listener so that the listener can arrive at the correct answer. This setting is novel.\n2. The paper proposes on the fly adaptation based on the listener's feedback. The real time adaptation is interesting and useful.\n3. The paper conducts thorough experiments on three RL algorithms and demonstrates the effectiveness of KTO. The experiments provides a thorough comparison between different RL algorithms.\n4. The writing is generally clear and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies an interesting problem setup: how to adapt MLLMs to perceptual misunderstandings. The paper introduces a novel framework of having a speaker MLLM and a listener MLLM, where the speaker MLLM need to learn adaptation to the listener MLLM on the fly so that the listener MLLM can come up with the correct answer. The paper proposes two settings where the MLLM can have misunderstanding: color blindness and blurry images. The paper tested on 3 RL algorithms to do online adaptation: PPO, NLPO, KTO, and found that KTO attains the best performance. The paper also provides qualitative results of the response difference between the adapted MLLM and the original MLLM." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper lacks comparison with more baseline methods. Some simple and training-free personalization methods in MLLMs might directly solve this problem better. Eg, adding the experiment of few-shot/one-shot learning would be a useful comparison with the online adaptation method that the paper proposes. RAG methods with a memory augmented module. Or some implict/explicit prompt engineering techniques.\n2. The qualitative analysis is not thorough enough. Eg, in Figure 7, the author noted that the adapter-generated description is better because it has less color attributes. 
However, this is still a surface-level analysis as there are many differences between the two descriptions generated, such as length. It would be better to conduct a deeper analysis of the differences between the two generated descriptions, such as the response length.\n3. The paper covers \"color blind\" and \"blur\" as the two attributes of the listener. It is not clear to me how these two attributes were chosen and how they align with real-world misunderstandings between MLLMs." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "n/a" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Examples in Figure 8 are particularly bad in terms of the quality of the description. How do you explain this? Have you thought about mitigating this language drift problem?\n\n2. Why did you use PaliGemma only for the RES task?\n\n3. The example in Figure 2 is not clear to me. The example seems very unfortunate because it's hard to discriminate two birds when the image is black and white." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Studying continuous adaptation of Vision and Language Models is definitely an interesting topic that should be explored more by the community\n\n2. The authors test different training regimes using techniques such as KTO, PPO and NLPO. The evaluation uses different models such as Llava-7B and Llava-13B, which makes the experiments easily reproducible by the community" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors study the efficient online adaptation of agents implemented as Multimodal Language Models (Vision language models more specifically). Particularly, the authors study this online adaptation with both a reference identification task (REI) and a reference segmentation task (RES). In REI, a listener needs to select the correct target image between 2 images using a description generated by a speaker. For the RES task, the speaker needs to generate a description for an object in an image and the listener has to derive a segmentation mask for it.\n\nIn this paper, the authors study how the well-known RLHF algorithms can be adapted to the online setting, which is more challenging because these algorithms typically assume single-turn data rather than dialogues with noisier rewards. For their evaluation, they test different SOTA VLMs on images derived from relatively standard benchmarks such as COCO, ImageNet, etc.\n\nThe results highlight that adaptation seems to have a negative effect on the quality of the descriptions, which diverge to very unnatural ones that do not include any object attributes, compared to the Zero-shot variants that are much more descriptive." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
The authors consider referential games with only two images which incredibly reduces the ambiguity of the task. Additionally, they do not compare with existing literature from multi-agent language evolution (e.g., [1])\n\n2. It's not clear to me to what extent the benchmarks that the authors have used are completely unseen by the models. For instance, it's very likely that RefCOCO is part of the Llava fine-tuning considering that they use MSCOCO images. Authors should pay more attention to the problem of data contamination which I believe was ignored by the authors.\n\n3. The models used in the evaluation are not up to date considering that there are many strong variants such as Llava-1.5, QwenVL-2, Llama-3.2 and Molmo. I would suggest the authors provide additional results with these baselines to make the results much stronger.\n\n4. The authors should clarify the way the different models are adapted. Do they always adapt the speaker or only the listener? This is an important research question that I think is not clearly highlighted by their evaluation.\n\n5. Their models are clearly affected by language drift during the adaptation procedure. I believe the authors should focus on a more detailed analysis of the language developed by the models and how it changes over the different games. This should also be compared to utterance length and vocabulary size to verify whether models are simply maximising success rate and forgetting their language abilities.\n\n## References\n\n[1]: Lazaridou, A., & Baroni, M. (2020). Emergent multi-agent communication in the deep learning era. arXiv preprint arXiv:2006.02419." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- In figure 8, the ZSL experiments seem to be quite low-quality captions of the image. It seems like the prompting could have quite large impacts on ZSL performance.\n- How precisely are images provided to the LLMs (through tiling, or separate image addition)? Models such as Llava are not designed for multi-image reasoning, and so it is important to correctly work around those limitations. \n- Does ZSL performance improve when the prompt indicates that the listener may have some kind of impairment (or the impairment is explicitly specified)? \n- Does Figure 10 really show a divergence effect? Is this unexpected, or just an artifact of gradient-based optimization? It seems like in general the trend is increasing, as would be expected from RL agents. Further, does Figure 10 plot validation or test accuracy? If it's plotting test accuracy, this would indicate a significant issue in the evaluation methodology, since the model is being tuned on the test set.\n- It would be really helpful if Figure 3 was presented as a table instead of a radar chart. 
Because the axes have no relationship to each other, the shapes are generally misleading, and the chart makes it quite difficult to understand finer grained performance details.\n- In general, some more tables would be appreciated, since locating all of the comparative numbers within the paper is quite time consuming. Further, Figures 5,6, and 9 are impossible to read clearly without knowing the base numbers, and might be better as tables.\n- It would be interesting to see some failure cases of the model. What is happening when miscommunications occur? \n- How do the chosen reinforcement learning algorithms (PPO, KTO, NLPO) compare in terms of training stability? The results in Fig. 10 seem to be from a single run - are the results different across runs? \n- Does the use of LoRA impact the adaptation performance compared to fine-tuning all of the parameters? \n- Are there model size effects (i.e. using 7B vs. 13B)? \n\nSome additional minor comments:\n- The descriptions of RLHF, along with PPO, KTO, and NLPO in Section 3.1 take up a lot of space, and could be moved to the appendix in favor of additional analysis, qualitative results, or tables.\n- Is random performance on the task 0.5 accuracy? If so, it would be nice to explicitly clarify that in the paper (since random performance is mentioned on L840). If not, it would be good to know.\n- It would be interesting to investigate a wider range of perceptual weaknesses (for example, resolution, partial occlusion, field of view, focal length (blurring at different depths), spatial distortion, inverted colors, etc.). \n- The motivation for the specific dataset selection is somewhat unclear, and it would be good to have improved motivation as to why, precisely, these datasets were chosen. \n- How expensive (computationally) are these experiments? How long does the average rollout take, and the average experiment?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper presents an original approach by exploring real-time adaptive capabilities in MLLMs using RL to dynamically adjust descriptions in communication tasks. This is a relatively novel direction, and is quite an interesting application domain. I find the idea of introducing perceptual weaknesses into the AI to be quite a novel idea - and of great interest - as I think that very few approaches have focused on specializing to perceptual weaknesses. As far as I am aware, there is no other work that studies the idea of having conversational interactions with visually impaired listeners, and looking at how models are capable of handling such situations.\n- The paper clearly demonstrates that online RL-based adaptation can improve performance on the scenario tasks. \n- The paper's results and claims are quite clear, and easy to understand, and the data presented (fairly well) supports those claims." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a novel framework for referring expression tasks in which two MLLM agents attempt to communicate with each other to communicate information about images in a scene. The dataset/task, based on CLEVR, CUB, ImageNet and RefCOCO is constructed by sampling two images, with one of those images designated as the target image. 
The \"speaker\" is then asked to the describe the \"target\" image relative to the other image, and the \"listener\" attempts to identify the target image from the two images. This paper evaluates several speaker and listener MLLMs on this task, as well as fine-tunes the speaker MLLM using reinforcement learning, and demonstrates several effects including that adaptation improves listener performance (KTO adaptation improved the best listener accuracy from 0.58 (LLava-13B) to 0.67 on CLEVR and that certain attributes matter more than others (Color and shape attributes were crucial for performance, with GPT-4V’s accuracy dropping from 0.99 to 0.84 on CLEVR when color was omitted)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Given the strengths, there are also several weaknesses:\n- There is no statistical analysis of the data, so it's quite challenging to tell if there are statistically significant differences between the methods. While the absolute differences are large, the fact that the dataset size is somewhat small (accuracy over 300 episodes), might lead to relatively high variance in the accuracy metrics, and it would be nice to have that variance reported in the paper (especially when significant differences are claimed: L319, L32, L370, L376, L427). \n- It's not really clear to me how challenging this task is. Because the images are selected at random, it seems likely that not that much information needs to be communicated between the agents to correctly select the target image/complete the target task. That seems to contrast with the difficulty of the problem for the agents (with the exception of GPT-4v, which seems to perform quite well at the task, achieving almost 100% accuracy). This suggests to me that with some prompt tuning, open models could achieve much higher accuracies as well. The paper would benefit from an improved approach to selecting hard negatives, which might help increase the difficulty of the task. \n- It's not clear if the interactions are actually multi-turn (as indicated by Nx in Figure 1), or if the interactions that the agents have are merely single-turn interactions (as seems to be the case in Figure 7 and Figure 8). While it makes sense to have single-turn interactions for simplicity, I think that claiming that MLLMs are \"adapting\" in the case of single-turn interactions is quite weak. Ideally, the \"conversation\" should have more than one turn where the speaker must determine the kind of impairment or confusion that the listener has, and then adjust to that, rather than adjust to global speaker impairments over time. \n- Several of the effects mentioned in the paper seem to be caused by poor prompting of the speaker MLLMs, rather than actual failures during the task. For example, the effect of visual prompting mentioned on L494, or the non-specific descriptions in Figure 8. It also seems like the descriptions are generally not comparative (Fig 7) - which seems to indicate that the models aren't taking into account multiple images during the prompting process. GPT-4v is rather robust to these prompting issues, and has considerably better performance, so I wonder if that is the underlying cause of many of the effects in this paper.\n- The paper only investigates the LLaVA-7B speaker, and does not look at other speaker agents. It would be nice to see if these effects are generalizable to other speaker agents." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "No ethics review needed." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Could the authors elaborate on the variance between different adaptation algorithms across datasets, especially why KTO performed better in RES tasks?\nWere there any attempts to test the trained models with real human interactions? This could validate the practical applicability of the proposed methods.\nHow would the proposed method handle more complex perceptual challenges, like occlusion, or scenarios with multiple perceptual weaknesses simultaneously?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Originality: The concept of online, real-time adaptation to perceptual weaknesses through reinforcement learning in MLLMs is innovative and provides a step forward for personalized multimodal interactions.\nQuality: The methodology is comprehensive, with experiments covering diverse datasets and models, adding to the robustness of the findings.\nClarity: The explanation of the reinforcement learning algorithms (PPO, KTO, NLPO) is well-articulated, as is the application of LoRA for efficient parameter tuning.\nSignificance: This work addresses a vital aspect of real-time communication adaptation for AI models, potentially making them more inclusive and functional in real-world applications." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper, \"Adapting Communicating MLLMs on the Fly in Referring Expression Tasks,\" explores whether multimodal large language models (MLLMs) can adapt to their communication partners’ perceptual weaknesses, such as color blindness or blurred vision, in real time. Using referring expression identification (REI) and segmentation (RES) tasks, the paper evaluates how well MLLMs can fine-tune their responses to improve interaction with varying levels of listener comprehension. Through on-the-fly reinforcement learning and using LoRA adapters for efficient fine-tuning, the authors test different MLLMs (LLaVA, Qwen, and PaliGemma) and adaptation algorithms (PPO, KTO, NLPO) on datasets such as CLEVR, CUB, ImageNet, and RefCOCO. The results indicate that online adaptation, especially through KTO, enhances task performance and communication efficacy for MLLMs, revealing the potential for personalized MLLM applications." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Limited Adaptability: While KTO shows improvement, the adaptation results vary across different tasks and MLLMs, and the paper lacks an exploration of methods to enhance consistency across different perceptual impairments.\nLack of Human Interaction: Although the study uses MLLM-MLLM interactions, the paper could be strengthened by experiments involving human listeners, which would provide a clearer perspective on practical applications.\nEvaluation Scope: The paper could further assess performance over a broader range of perceptual weaknesses beyond color blindness and blur, such as partial occlusion or noise." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024adapting,\ntitle={Adapting Communicating {MLLM}s on the Fly in Referring Expression Tasks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3fuPS85ekI},\nnote={under review}\n}" }, "abstract": { "value": "Multimodal Large Language Models (MLLMs) exhibit varying comprehension levels in language and perception that complicate interacting with a diverse population of agents, similar to how miscommunication happens in humans, e.g., because intentions are not always known.\nIn this work, we investigate whether MLLMs can adapt to the perceptual weaknesses of the communication partners in an online manner, i.e. change the way they describe their environment in a way that is understandable to their partner while communicating with them, via reinforcement learning.\nWe experiment with two tasks: referring expression identification (REI) and referring expression segmentation (RES), where a speaker agent has to describe an object, and a listener has to identify it.\nTo be successful, the speaker agent must discern the comprehension level of the listener and adapt accordingly, especially when the listener suffers from perceptual weaknesses such as color blindness or blurred vision.\nUnlike traditional offline alignment methods for LLMs, we fine-tune a Multimodal LLM (MLLM) online to adapt to other agents' conceptual understanding. Our experiments with four MLLMs on four datasets show that online adaptation is feasible in both REI and RES settings." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Multimodal Large Language Models", "Online Adaptation", "Referring Expressions" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/cc266a562768018363ad0af9e0db0516c74fc49e.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Adapting Communicating MLLMs on the Fly in Referring Expression Tasks" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3g2iyFU8gA
Learning Fused State Representations for Control from Multi-View Observations
main
Active
multi-view learning;reinforcement learning
reinforcement learning
5;5;5;5
3;3;4;4
3;3;3;3
2;3;2;2
3;3;3;3
5
3.5
3
2.25
3
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I recommend the authors systematically compare the similarities and differences between their method and Seo et al.'s masked multi-view RL approach within the main text." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is clearly written and easy to understand.\n- The proposed method that integrates bisimulation metric learning into the fusion process of multi-view states is reasonable.\n- The authors have provided extensive experimental results, covering various visual RL environments, to validate the effectiveness of the method. The paper also includes experiments with missing views as well as additional visualizations to interpret the effectiveness of the method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a method that combines a bisimulation-based approach with masked representation learning for multi-view reinforcement learning. The core idea is that to enable task-relevant multi-view fusion, it is essential to align the integration process closely with the specific objectives of the task. In other words, when fusing information from multiple views, the task’s specific goals (Equation 8) must be considered. The authors have evaluated their method on two visual control environments, including Meta-World and PyBullet, demonstrating significant performance improvements over baseline methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My main concerns involve the novelty of the method and the completeness of experimental comparisons:\n\n- The primary limitation lies in the method's novelty. Although the authors present two core challenges of multi-view RL in the introduction, these challenges have already been extensively explored in prior research. While incorporating bisimulation metrics into state aggregation is reasonable, bisimulation-based methods are also well-covered in existing RL literature, making this combination feel more like a natural choice than a groundbreaking innovation.\n- Although the authors conducted extensive experiments and validated the effectiveness of their approach against various existing multi-view RL methods, there are still two main gaps. First, there is no experimental verification of whether the method remains superior to baseline models in cases with missing views (even with a single view). Second, Seo et al. (2023) proposed the masked world model, which performs well on multi-view RL tasks and has methodological similarities to the approach in this paper. A direct comparison with Seo et al.'s work would provide stronger support for the effectiveness of this method." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see the weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The writing is relatively clear.\n\n2. The performance of the proposed method is validated on Meta-World and Pybullet benchmarks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes the Multi-view Fusion State for Control (MFSC), which integrates a self-attention mechanism and bisimulation metric learning to fuse task-relevant representations from multi-view observations, and incorporates a mask-based latent reconstruction auxiliary task to obtain more compact fused representations and handle missing views." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The author incorporates bisimulation principles by integrating reward signals and dynamic differences into the fused state representation to capture task-relevant details. As I am aware, [1] also acquires representations for control with bisimulation metrics. Additionally, the author employed a Mask-based Latent Reconstruction strategy, which is analogous to that in [2]. Does this similarity suggest a deficiency in significant innovation or does the author offer additional components or enhancements that differentiate it from the existing strategies in [1] and [2]? Furthermore, it is essential to determine whether appropriate credit and comparison with the prior works in [1] and [2] have been adequately accounted for.\n\n[1] Learning invariant representations for reinforcement learning without reconstruction.\n\n[2] Mask-based Latent Reconstruction for reinforcement learning。\n\n3. Missing many recent visual RL baselines: the baselines used in the paper are all old methods and a large body of the recent methods developed on visual reinforcement learning are ignored [1][2].\n\n[1] TACO: Temporal Latent Action-Driven Contrastive Loss for Visual Reinforcement Learning.\n\n[2] Mastering Diverse Domains through World Models.\n\n4. Whether this method is only useful for robot control tasks needs to be further verified on more types of environments, such as Carla, atari, etc.\n\n5. The paper lacks sufficient ablation experiments. The author only ablated MFSC without bisimulation constraints ('MFSC w/o bis') and MFSC without Mask and Latent Reconstruction ('MFSC w/o res'), but not more detailed parts like the Self-Attention Fusion Module.\n\n6. The author claims that MFSC can be seamlessly integrated into any existing downstream reinforcement learning framework to enhance the agent's understanding of the environment. However, there are no relevant experiments to verify this claim." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see weakness section." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.\tThe paper addresses the challenging and significant problem of learning task-relevant fused state representations from multi-view observations, which is a crucial aspect of multi-view reinforcement learning. \n2.\tThe integration of a mask-based latent reconstruction task enhances the model’s ability to learn cross-view information. The proposed approach, combining self-attention and bisimulation metrics, offers an effective solution.\n3.\tThis paper demonstrates the effectiveness of MFSC across multiple challenging benchmarks, including robotic manipulation tasks in Meta-World and control tasks in Pybullet." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a novel architecture called Multi-view Fusion State for Control (MFSC), designed to learn compact and task-relevant representations from multi-view observations in reinforcement learning (RL). This approach integrates a self-attention fusion module with bisimulation metric learning to aggregate information from different views, while also using a mask-based latent reconstruction auxiliary task to promote cross-view information aggregation. Experiments conducted on Meta-World and Pybullet demonstrate the superiority of MFSC over other methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tThis paper does not include comparisons with approaches tailed for visual RL, such as [1-2], particularly multi-view visual RL method like [3]. Evaluating MFSC against such baselines would provide a more accurate assessment of its effectiveness and novelty.\n2.\tHow does the computational complexity of MFSC compare to baseline approaches in terms of training time, inference time, and resource requirements?\n3.\tThis paper does not provide sensitivity analyses of MFSC with respect to different hyperparameters, such as the weight of fusion loss and the weight of reconstruction loss.\nReferences\n[1] Hafner et al. Mastering diverse domains through world models. arXiv preprint arXiv:2301.04104, 2023.\n[2] Seo et al. Masked world models for visual control. CORL, 2023.\n[3] Seo et al. Multi-view masked world models for visual robotic manipulation. ICML, 2023." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Overall, while the MFSC architecture presents a promising direction for multi-view reinforcement learning, addressing the outlined weaknesses and incorporating the suggested improvements will significantly enhance the paper's clarity, depth, and impact in the field." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.\tClear statements and good structure. The paper is well-structured, and viewpoints was stated logically. The introduction provides a good overview of the challenges in the multi-view representation learning task and approach to address them relatively. Also illustrate provided along with methods made it easy and vivid.\n2.\tSufficient and solid proof in major conclusions. Problems were clearly defined and followed by mathematical formulations with clear explanation and ended with a solution with validate experiments. \n3.\tComprehensive experiment and supportive solution ,also contributions made by this method were shown vividly and clearly through several comparative illustrate shown in the part of Experiments. \n4.\tReproductive experiment with project code and data shared. Experiments result can be verified personally by readers with resources provided in this paper." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a novel approach named Multi-view Fusion State for Control(MFSC),which ingrates a self-attention mechanism with bisimulation metric learning to fuse task-relevant representation from multi-view observation. Additionally, the paper also incorporated a mask-based latent reconstruction auxiliary task to learn cross-view information in order to foster more compact fused presentation. In this paper, two major problems were solved : First is Higher data dimensions and more redundant information , and Informative aggregation of representation from various views." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "•\tA few formula faults are discovered in the paper.\n•\tEvaluation Metrics: The evaluation metrics used in the experiments could be more comprehensive. Currently, the focus appears to be on task performance, but including metrics that assess representation quality (e.g., reconstruction loss) would provide a fuller picture of the model’s effectiveness.\n•\tGeneralization to Other Tasks: The experiments are primarily conducted on Meta-World. To evaluate the generality of the approach, the authors should consider applying MFSC to other control tasks or environments. 
This would help demonstrate the versatility and broader applicability of the proposed method.\n•\tLimitations Discussion: The paper should include a dedicated section discussing the limitations of the proposed method. Identifying potential weaknesses and suggesting avenues for future work would add depth to the contribution." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a method to learn fused state representations for multi-view RL." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024learning,\ntitle={Learning Fused State Representations for Control from Multi-View Observations},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3g2iyFU8gA},\nnote={under review}\n}" }, "abstract": { "value": "In visual control tasks, leveraging observations from multiple views enables Reinforcement Learning (RL) agents to perceive the environment more effectively. However, while multi-view observations enrich decision-making information, they also increase the dimension of the observation space and introduce more redundant information. Thus, how to learn compact and task-relevant representations from multi-view observations for downstream RL tasks remains a challenge. In this paper, we propose a Multi-view Fusion State for Control (MFSC), which integrates a self-attention mechanism with bisimulation metric learning to fuse task-relevant representations from multi-view observations. To foster more compact fused representations, we also incorporate a mask-based latent reconstruction auxiliary task to learn cross-view information. Additionally, this mechanism of masking and reconstruction can empower the model with the ability to handle missing views by learning additional mask tokens. We conducted extensive experiments on the Meta-World and Pybullet benchmarks, and the results demonstrate that our proposed method outperforms other multi-view RL algorithms and effectively aggregates task-relevant details from multi-view observations, coordinating attention across different views." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "multi-view learning", "reinforcement learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/f09cbf35e518b383f42e81d3bcbbbce30a6c8430.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Learning Fused State Representations for Control from Multi-View Observations" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3g7HuQ8avZ
OmniContrast: Vision-Language-Interleaved Contrast from Pixels All at once
main
Active
vision-language contrastive learning
applications to computer vision, audio, language, and other modalities
5;5;6;6
3;4;4;2
2;2;3;3
3;2;3;3
3;3;2;3
5.5
3.25
2.5
2.75
2.75
-0.301511
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed.", "Yes, Discrimination / bias / fairness concerns", "Yes, Privacy, security and safety", "Yes, Legal compliance (e.g., GDPR, copyright, terms of use)", "Yes, Potentially harmful insights, methodologies and applications", "Yes, Responsible research practice (e.g., human subjects, data release)", "Yes, Research integrity issues (e.g., plagiarism, dual submission)", "Yes, Unprofessional behaviors (e.g., unprofessional exchange between authors and reviewers)", "Yes, Other reasons (please specify below)" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "see weakness" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. OmniContrast is among first to explore vision-language correspondence on image-text interleaved web documents in CLIP-style.\n2. Authors propose three consecutive information retrieval benchmarks, including AnyCIR, SeqCIR, and CSR to o facilitate the evaluation of omni-modality understanding.\n3. The effectiveness is validated by experimental results." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposed OmniContrast, a unified contrastive learning model that processes multi-modal web documents by transforming all modalities, including text, into pixel space for a single vision model to handle. It achieves competitive or superior performance compared to standard CLIP models, demonstrating the value of multi-modal web data for advancing vision-language learning." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I am concerned about the motivation with the single modality in the pixel space. I believe it is limited in a few ways.\n\n1. It is ture that \"image-text interleaved content is natively present in visual formats such as screenshots\". Screenshot is a scenario, however, in more cases, such as the very rich html format image-text interleaved data (much richer than screenshots), images and texts are naturally presented in different modalities. \n\n2. Is it really practical unifying them into pixels? In many cases, we have seperated texts and images, where we have to re-organize them in the form of \"screenshots\" to use the model. It can be redundant. And organizing them in the form of \"screenshots\" itself can involve some issues, such as the limitation from the resolution, etc. I agree that CLIPPO (Tschannen et al., 2023) demonstrates that the vision encoder can learn meaningful textual representation directly from pixels, however, \"it is feasible to do so\" does not mean it is a good solution in different scenarios. I am looking for a strong motivation to do so.\n\n3. In Tab. 6, simple alternatives like CLIP-V+T, and UniIR-CLIP are very effective when compared to Omni. 
That is also why I question whether unifying them into pixels is a good and well-motivated solution." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- In Section 5.2, do the authors only use the vision encoder of CLIP/OpenCLIP for evaluation? Why not use the full CLIP/OpenCLIP model? \n\n- Could the authors provide results on common benchmarks like MS-COCO (text-to-image retrieval), Flickr30k (text-to-image retrieval), and the GLUE benchmark, like what CLIPPO [1] did? The reviewer thinks this can better figure out what OmniContrast can and cannot do. \n - As said in the Weaknesses, all results of baselines are reproduced by the authors. Comparisons on common benchmarks would make the evaluation stronger.\n\n- Another question is, why would we choose OmniContrast when there are many next-token-prediction VLMs? For example, the Emu series [2]. Such VLMs may be the mainstream now. The reviewer thinks these VLMs can also do what OmniContrast can do. Relevant discussions/comparisons are required.\n\n\n[1] https://arxiv.org/pdf/2212.08045\n\n[2] https://github.com/baaivision/Emu" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Clear presentation.\n\n- The evaluation of different methods on AnyCIR and SeqCIR seems sound.\n\n- The method is also straightforward, and using only a unified model saves memory." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper develops OmniContrast to unify vision-language modeling from image-text interleaved web data. To evaluate such a unified model, the authors develop the AnyCIR and SeqCIR benchmarks. These two benchmarks focus on evaluating the relevant snippet retrieval ability of the model." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The reviewer appreciates the development of benchmarks like AnyCIR and SeqCIR. One pity is that the results of baselines are all reproduced by the authors. No third-party baselines are provided.\n\n- No results on common benchmarks are provided. In this case, the reviewer may think that OmniContrast is only developed for this specific CIR task. It may discount the contribution of this work." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I believe that the handling of interleaved data is a significant distinction between OmniContrast and CLIPPO. \n\nTherefore, I'm curious about the differences in the model's performance when using interleaved data compared to image-text pairs." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The model performs excellently, achieving outstanding results in multiple baselines.\n- Good writing and detailed experiments.\n- A novel and useful approach for transforming interleaved data into pixel space." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents the OmniContrast model, which unifies vision and text into a pixel space for web document retrieval and understanding. Moreover, this paper presents three new information retrieval benchmarks (AnyCIR, SeqCIR, and CSR) to evaluate the ability of the model to retrieve continuous information in complex multi-modal documents." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- I'm not sure if I'm misunderstanding the model, but I think there is a lack of comparisons on some baselines, such as VQAv2 and GLEU like the comparisions in CLIPPO.\n- I think there is a lack of further discussion on the necessity and effectiveness of unifying text and images into pixel space, as well as a comparison of the differences between interleaved data and text-image pairs in this unified pixel space." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Does training a model in this omni-style make it easier or harder to converge?\n2. Related to Q1, do the authors believe that adding modalities helps the model learn each modality better, or does it make the training problem more complicated?\n3. What would happen to OmniContrast if there were abundant data in three modalities but limited data in the fourth modality?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Excellent ablation study demonstrating the necessity of including each modality in the proposed pipeline (Table 1).\n2. Clearly outperforms baseline methods, allowing the model to work in different modality settings." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "OmniContrast, a unified contrastive learning model for understanding vision, language, and vision-language interactions within multi-modal web documents. Unlike traditional models, OmniContrast:\n\n- Explores a new contrastive approach to maximize similarity between consecutive snippets from image-text interleaved web documents.\n- Unifies all modalities (text, images) into pixel space, rendering text visually, simplifying processing and representation.\n- Enables a single vision model to process any modality." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Despite the proposed method outperforming CLIPPO in terms of average scores, it seems that the baseline method is capable of handling all modalities in OmniContrast. Clarification on the contribution is needed.\n\n2. Data augmentation of the training data is a crucial part of the pipeline, but it is not well-documented, raising concerns about synthesizing low-quality training samples.\n\n3. Figure 2: The images and fonts are extremely small, making it difficult to understand. The caption fonts also appear too small.\n\n4. The concept of omni-modality seems odd from a reading perspective, as it appears the authors are solving vision-language problems.\n\n5. In the abstract, \"OmniContrast unifies all modalities into pixel space, where text is rendered visually\" was difficult to understand until reading the entire introduction and related work section. The term \"rendering\" suggests high-resolution 3D scenes, whereas simple text copying and pasting is not truly rendering." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024omnicontrast,\ntitle={OmniContrast: Vision-Language-Interleaved Contrast from Pixels All at once},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3g7HuQ8avZ},\nnote={under review}\n}" }, "abstract": { "value": "In this work, we present OmniContrast, a unified contrastive learning model tailored for vision, language, and vision-language-interleaved understanding within multi-modal web documents. Unlike traditional image-caption data with clear vision-language correspondence, we explore a new contrastive fashion on maximizing the similarity between consecutive snippets sampled from image-text interleaved web documents. Moreover, to enable CLIP to handle long-form text and image-text interleaved content from web documents, OmniContrast unifies all modalities into pixel space, where text is rendered visually. This unification simplifies the processing and representation of diverse multi-modal inputs, enabling a single vision model to process any modality. To evaluate the omni-modality understanding of OmniContrast, we design three consecutive information retrieval benchmarks AnyCIR, SeqCIR, and CSR. Extensive experimental results demonstrate that OmniContrast achieves superior or competitive omni-modality understanding performance to existing standard CLIP models trained on image-text pairs. This highlights the potential of multi-modal web documents as a rich and valuable resource for advancing vision-language learning." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "vision-language contrastive learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/54d36bfe2a047bed385ca766a9005cb3da640f71.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "OmniContrast: Vision-Language-Interleaved Contrast from Pixels All at once" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3gwNb8qZDr
Visual Prompting Reimagined: The Power of Activation Prompts
main
Active
visual prompt;parameter efficient finetuning;learning theory;generalization analysis
transfer learning, meta learning, and lifelong learning
3;3;3;6
4;5;5;4
2;2;3;3
2;2;1;3
1;2;3;3
3.75
4.5
2.5
2
2.25
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I believe this paper should be entirely rewritten and substantially revised." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is clearly written and easy to follow, with a well-structured presentation of the proposed ideas and theoretical analysis. The theory offers valuable insights into data complexity and the role of prompt application at different layers, which adds depth to the work." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a generalized approach to visual prompting (VP) by enabling learnable prompts to be added at deeper layers of the model. They also introduce a theoretical framework that explores the relationship between data sample complexity and the layer depth at which prompts are applied." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The primary concern is the validation of the core premise. The authors present AP (their approach) as a generalized or extended version of VP. However, AP and VP do not operate under the same assumptions: VP is typically applied in a black-box setting, while AP requires a white-box model, as noted in Line 65. **This distinction is critical, as it suggests that AP may be more aligned with Visual Prompt Tuning (VPT, Jia et al.), a classic white-box method.** In this respect, AP might actually be a specific case of VPT rather than a true generalization of VP. This discrepancy raises concerns about whether the connection between VP and AP has been overstated, potentially to differentiate it from existing VPT approaches. Consequently, the novelty of AP appears limited, as its distinctions from VPT are not substantial.\n\n- The experimental validation of AP is also limited in comparison to VPT and related works, leaving questions about its empirical advantages. \n- Additionally, while the theoretical contributions are interesting, the connection between the theory and the design of AP is not sufficiently strong." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See weakness." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* This paper is easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduced a generalized concept, called activation prompt (AP), which extends the scope of (input-level) VP by enabling universal perturbations to be applied to activation maps within the intermediate layers of the model. The authors also showed that AP is closely related to normalized tuning in CNNs and ViTs. Experiments are conducted on 29 datasets to demonstrate the effectiveness of the proposed approach." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The writing of this paper could be largely improved. The majority of the claims made by the authors are not supported. Statements from line 57 to line 65 are way too handwavy. These claims are not supported by any experiments or theories. From line 67 to line 86, the authors spent huge effort explaining the difference between VPT and the proposed AP which is very confusing. Overall, it is very confusing what the authors try to convey in the introduction section. It is usually expected to see certain background and motivation and the relation to the proposed approach. \n\n* Limited novelty. Although the authors claimed that the proposed AP is very different from VPT, AP is essential identical to VPT, or in the authors words, a special case of VPT. AP is built on the claim that tradition VP only deals with input space, and AP deals also with intermediate features. Sadly, this claim is not true, since VPT-deep already studied adding prompts to intermediate features. That is to say, line 172-192 is already well studied by VPT.\n\n* No performance gain compared to baselines. In table 4, the reported performance of the proposed AP outperforms none of the listed baselines. The comparison of efficiency is simply meaningless with degraded performance. To the extreme extent, updating nothing gives worst performance with best efficiency. \n\n* Although the discussion of AP and normalization tuning shows something insights, this alone does not make much of a contribution." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "There are a few concerns regarding the submission: 1. The claim that PT is inferior to fine-tuning is not entirely accurate. This work \"Facing the Elephant in the Room: Visual Prompt Tuning or Full Finetuning?\" provides a systematic analysis of both techniques, and the choice between PT and FT generally depends on the specific task and model size; 2. More empirical analysis on computational latency is needed. Since AP requires a larger parameter budget than PT, what are the associated training costs?; 3. 
The paper shows promising few-shot learning performance, but the reliance on pretrained model size and specificity may pose overfitting risks, especially with limited data." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "AP introduces an approach to visual prompting by expanding from input-based modifications to activation-level prompts, allowing for targeted, layer-specific customization that enhances performance and efficiency. This technique appears to improve VP's effectiveness and better adapts it to different model architectures. Extensive evaluations across diverse datasets and models, including CNNs and ViTs, underscore AP’s adaptability and robustness, establishing its versatility for various vision tasks. Furthermore, the paper provides theoretical insights into layer-specific behavior, clarifying how AP preferences vary across model layers and types." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper, Visual Prompting Reimagined: The Power of Activation Prompts, proposes a novel approach called Activation Prompting (AP) to enhance Visual Prompting (VP) for adapting pretrained vision models to new tasks. Unlike traditional VP, which modifies the input data, AP applies perturbations to intermediate activation maps within the model, effectively broadening VP's scope. AP enables deeper customization by focusing prompts on specific model layers, allowing it to adapt based on model type and layer sensitivity." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "There are a few constraints for the work. 1. AP requires white-box access to the model's internal layers, limiting its applicability in scenarios where only black-box access is available, such as with proprietary models. 2. AP's effectiveness is not as pronounced in smaller models like ResNet-18 or ViT-Tiny, as noted in the paper. This limits its versatility, particularly for applications relying on compact or resource-constrained models." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Please see the Weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper is easy to follow, and the research topic is interesting, especially when considering the current surge of interest in prompt tuning.\n\nThe figures intuitively showcase the proposed method, and the findings are interesting." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Visual prompting has nowadays become a popular method to repurpose pretrained vision models for adaptation.
The authors highlight a noticeable performance gap between VP and conventional fine-tuning methods. To address this, they introduce AP, extending the scope of (input-level) VP by enabling universal perturbations to be applied to activation maps within the intermediate layers of the model." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) The main experiments are conducted on ResNet-101 and ViT-Large/16, which are not common practice, especially when considering that AP is compared with VPT. In the appendix, it is good to see that the authors report results on ViT-B/16 (Table A2). However, when looking into the performance itself, it is not surprising that AP has unsatisfying results (i.e., middle-level parameter usage and middle-level performance). (Also, here it should be FGCV, not FGVC.) This further raises my questions on the contribution of AP (see 2).\n\n2) The contribution/logic of this paper is poor; AP is more like a variant of VP in any sense. In Lines 67-86, the authors discuss the difference between AP and VPT, as VPT applies prompts across multiple layers. In the VPT paper (page 19, sharing prompts), the authors clearly stated that they had an initial exploration of weight sharing across layers. In this sense, AP is more like an observation-based variant of VPT. The discussion on VPT is still fundamental; however, further efficiency concerns might mislead the community (see 3).\n\n3) In Line 245, the authors observe that ResNets and ViTs exhibit contrasting layer preferences for AP. During training, does that mean I should use grid search to go through all layers in order to find the best layer index? Figure 4 further proves my point: as the layer index varies, the performance changes significantly, potentially leading to unstable training. The observations are interesting; however, the clear separation stated by the authors might mislead prompt tuning research. In this sense, I do not think this paper is qualified for publication.\n\nSuggestions: as the observations are interesting, the authors might think of completely revising their current claims. Unfortunately, right now, I do not see fundamental changes/contributions at the structural level." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose activation prompting to analyze, understand, and advance the conventional visual prompting for parameter efficient transfer learning." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024visual,\ntitle={Visual Prompting Reimagined: The Power of Activation Prompts},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3gwNb8qZDr},\nnote={under review}\n}" }, "abstract": { "value": "Visual prompting (VP) has emerged as a popular method to repurpose large pretrained models for downstream vision tasks. Unlike many parameter-efficient fine-tuning (PEFT) techniques that modify model parameters, VP introduces a universal perturbation directly into the input data to facilitate task-specific fine-tuning while keeping the pretrained model intact. However, there exists a noticeable performance gap between VP and conventional fine-tuning methods, highlighting an unexplored realm in theory and practice to understand and advance VP to close its performance gap.
Towards this end, we introduce a novel concept, termed activation prompt (AP), which extends the scope of input-level VP by enabling universal perturbations to be applied to activation maps within the intermediate layers of the model. With the aid of AP, we unveil the intrinsic limitations of VP in both performance and efficiency. We also show that AP shares a close connection to normalization tuning used in convolutional neural networks (CNNs) and vision transformers (ViTs), albeit with variations in layer preferences for prompting. We theoretically elucidate the rationale behind such preferences by analyzing global features across layers. By conducting extensive experiments across 29 datasets and various model architectures, we provide a thorough performance analysis of AP, comparing it with VP and PEFT baselines. Our experimental results demonstrate that AP significantly surpasses the input-level VP in terms of both accuracy and efficiency, considering factors like time, parameters, memory usage, and throughput." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "visual prompt", "parameter efficient finetuning", "learning theory", "generalization analysis" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/a5bc9565d169cc972ec8593f047d16365380f4c0.pdf" }, "presentation": null, "primary_area": { "value": "transfer learning, meta learning, and lifelong learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/b7dea0332a004619e4c4a3e6e3e1e854155f4821.zip" }, "title": { "value": "Visual Prompting Reimagined: The Power of Activation Prompts" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3hc2ESNU6n
Training-free Long Video Generation with Chain of Diffusion Model Experts
main
Withdraw
generative models;diffusion models;video generation
generative models
Wenhao Li;Yichao Cao;Xiu Su;Xi Lin;Shan You;Mingkai Zheng;Yi Chen;Chang Xu
~Wenhao_Li14;~Yichao_Cao1;~Xiu_Su1;~Xi_Lin4;~Shan_You3;~Mingkai_Zheng1;~Yi_Chen18;~Chang_Xu4
3;5;5;5
4;5;5;5
2;3;3;2
1;3;3;2
3;4;3;2
4.5
4.75
2.5
2.25
3
1
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": { "value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors." } }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Provide some results that exhibit better dynamics.\n2. How does Confiner process videos of different resolutions if the models used are trained with different resolutions?\n3. If I want to generate some special videos, but some of the text-to-video models lack the ability to generate such kinds of videos, how can this be resolved? For example, I want to use a LoRA checkpoint with some special cartoon character for the text-to-image model. The other text-to-video model can generate such character structures or motions. How can this be resolved?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper proposes a novel framework that utilizes both ready-made text-to-video and text-to-image models to perform video generation.\n2. The experimental results show that ConFiner can generate higher quality videos on both objective and subjective metrics with a 10% reduction in costs.\n3. ConFiner-Long can generate consistent and coherent videos up to 600 frames." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces ConFiner, a model that decouples the video generation task into distinct sub-tasks: structure control, spatial-temporal refinement. It employs three pre-existing diffusion experts, each responsible for a specific task, thereby reducing the overall burden on the model and enhancing both the quality and speed of video generation. The paper further proposes a method of coordinated denoising, enabling two experts with different noise schedulers to collaborate on a timestep basis during the video generation process. 
Expanding on the ConFiner framework, the paper outlines three strategies: a consistency initialization strategy, a staggered refinement mechanism, and coherent guidance, which together aim to construct ConFiner-Long, a model designed to generate long videos." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The generated videos exhibit low dynamics. It seems that ensuring video consistency strongly conflicts with achieving dynamics.\n2. The contribution may be considered weak, as it heavily relies on other works, and some current video generation methods have presented much better generation capabilities.\n3. The core idea of splitting video generation into three stages is reasonable, but the paper lacks analysis of why it must be split into three stages specifically." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see my concerns in the weakness part." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper is well-written and easy to follow. The authors conduct extensive experiments to support the claims in the paper.\n2. The paper proposes ConFiner by decomposing video generation into three subtasks. Multiple expert diffusion models are employed.\n3. Coordinated denoising is proposed to allow two experts on different noise schedulers to collaborate timestep-wise in the video generation process.\n4. The proposed method supports longer video generation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a framework for long video generation by ensembling multiple diffusion models. Video generation is decomposed into spatial and temporal parts. T2V models are employed for control experts and temporal experts, and T2I models are employed for spatial experts. The authors utilize timestep 0 as the connection point to better ensemble different diffusion model experts. The proposed method can generate more consistent video frames." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The proposed method can generate longer videos with more frames. However, only the number of frames increases; neither the content nor the dynamics of the generated videos increases. From Fig. 1, the motions of StreamT2V are larger. Also, in the examples on the project page, the motion produced by the proposed method is small. Therefore, the videos are not genuinely long.\n2. The most significant contribution of this paper is the coordinated denoising. Its core is to use timestep 0 as the connection, which requires denoising the latent to timestep 0 in the intermediate steps and increases the computation cost. Furthermore, this technique is more like a trick.\n3. 
In the experiment section, the compared methods are not state-of-the-art. The authors should compare with more state-of-the-art methods.\n4. The technical contributions of the paper are not significant enough." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see above. If the authors address my concerns, I will consider raising the score. Thanks." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper discusses how to generate high-quality videos with already trained models, which is a very interesting topic. \n2. The structure of this paper is well-organized and easy to follow. \n3. The experimental results show the effectiveness of the proposed method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes ConFiner, which decouples the video generation task into three sub-tasks. For each sub-task, ConFiner uses three off-the-shelf diffusion models as experts. Furthermore, ConFiner proposes coordinated denoising, which allows different diffusion models to collaborate timestep-wise in the video generation process." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "There are some questions.\n1. On line 283, the authors claim that \"both schedulers share the same distribution at timestep 0.\" However, the distribution at timestep 0 corresponds to that of the training dataset. Typically, the training datasets used for T2V (Text-to-Video) and T2I (Text-to-Image) are not identical, so this statement is somewhat inaccurate. I suggest the authors provide additional insights into the choice of using timestep 0 as the anchor for the generated image or video.\n2. The authors add a certain amount of noise to the video or image generated at each stage when using each expert. I am curious whether the final generated video retains any connection to the original structured video.\n3. In the section on the consistency initialization strategy (from line 313 to line 317), do the authors use the same initial noise for each short video clip, with only the frame order randomized in each initial noise? If so, would this lead to repetitive content in the subsequently generated videos?\n4. From lines 348 to 361, the authors use L2 loss to calculate the difference between the current noise and the previous segment noise. However, according to the consistency initialization strategy, the noise is predefined. This raises some confusion—why is it necessary to further optimize the noise input to the model?\n5. In the video demo, I observed that ConFiner generates smoother videos. Additionally, compared to the base T2V model, the colors in ConFiner’s videos tend to appear more grayish." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed.", "Yes, Other reasons (please specify below)" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. The writing of the paper needs to correspond to the training-free long video generation in the title, including motivation, related work, method, and experiments.\n2. In the related work, the omission of these most related studies is puzzling. \n3. The authors should explain how their method is different from current methods and what makes it stand out, including ConFiner and ConFiner-Long" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "a. Clarity and Simplicity: The approach presented is straightforward, and the method section is generally clear and easy to follow.\nb. Comprehensive and Convincing Experiments: The experiments are thorough, with results showing that ConFiner effectively improves video quality and coherence compared to existing models.\nc. Long Video Generation Capability: ConFiner-Long introduces three strategies that help maintain consistency and smooth transitions in longer videos, allowing for the generation of high-quality videos with extended frame lengths." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces ConFiner, a method that uses image diffusion models to improve video generation. ConFiner first uses a text-to-video model to generate a rough structure of the video. Then, noise is added to this video, and it's passed through image generation models for spatial refinement and the video model for temporal refinement. They also propose ConFiner-Long, which is designed for making long videos by using three strategies to keep segments consistent and transitions smooth. Experimental results show that ConFiner improves video quality, and ConFiner-Long successfully generates longer videos." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The title suggests a focus on “training-free long video generation”, but the main content is mainly about introducing ConFiner’s ability to enhance video quality. And most experiments also focus on ConFiner’s improvements, creating a bit of a disconnect between the title and the paper's main content.\n2. Limited Novelty: ConFiner’s approach of using T2I models to enhance T2V quality isn’t new. Similar ideas have already been explored in works like VideoElevator[1]. This reduces the novelty of the proposed method.\n3. Missing Related Work: The paper is notably lacking in its discussion of long video generation and related work using T2I as a video generation refiner, such as [1,2,3] This aspect is vital as it forms the basis of the research's motivation. The omission of these most related studies is puzzling. \n4. 
The experiments mainly focus on ConFiner's own comparison and analysis and lack comparison with existing long video generation methods, such as StoryDiffusion, StreamingT2V, and SEINE. \n5. The ablation study on the three strategies of ConFiner-Long is missing quantitative results. Fig. 4 cannot fully prove the effectiveness of the three strategies.\n\n\n[1] VideoElevator: Elevating Video Generation Quality with Versatile Text-to-Image Diffusion Models\n[2] StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation\n[3] SEINE: Short-to-Long Video Diffusion Model for Generative Transition and Prediction" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@misc{\nli2024trainingfree,\ntitle={Training-free Long Video Generation with Chain of Diffusion Model Experts},\nauthor={Wenhao Li and Yichao Cao and Xiu Su and Xi Lin and Shan You and Mingkai Zheng and Yi Chen and Chang Xu},\nyear={2024},\nurl={https://openreview.net/forum?id=3hc2ESNU6n}\n}" }, "abstract": { "value": "Video generation models hold substantial potential in areas such as filmmaking. However, current video diffusion models incur high computational costs and produce suboptimal results due to the high complexity of the video generation task. In this paper, we propose \\textbf{ConFiner}, an efficient high-quality video generation framework that decouples video generation into easier subtasks: structure \\textbf{con}trol and spatial-temporal re\\textbf{fine}ment. It can generate high-quality videos with a chain of off-the-shelf diffusion model experts, each expert responsible for a decoupled subtask. During the refinement, we introduce coordinated denoising, which can merge multiple diffusion experts' capabilities into a single sampling process. Furthermore, we design the ConFiner-Long framework, which can generate long, coherent videos with three constraint strategies on ConFiner. Experimental results indicate that with only 10\\% of the inference cost, our ConFiner surpasses representative models like Lavie and Modelscope across all objective and subjective metrics. ConFiner-Long can generate high-quality and coherent videos with up to 600 frames." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": { "value": [ "~Wenhao_Li14", "~Yichao_Cao1", "~Xiu_Su1", "~Xi_Lin4", "~Shan_You3", "~Mingkai_Zheng1", "~Yi_Chen18", "~Chang_Xu4" ] }, "authors": { "value": [ "Wenhao Li", "Yichao Cao", "Xiu Su", "Xi Lin", "Shan You", "Mingkai Zheng", "Yi Chen", "Chang Xu" ] }, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "generative models", "diffusion models", "video generation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": { "value": "li|trainingfree_long_video_generation_with_chain_of_diffusion_model_experts" }, "pdf": { "value": "/pdf/62fe9cdb0c4e4a640f28ae22bf0867f590ed2f9c.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/6e2226f8b5f4ca9b4a9788025382fe717b612dbe.zip" }, "title": { "value": "Training-free Long Video Generation with Chain of Diffusion Model Experts" }, "venue": { "value": "ICLR 2025 Conference Withdrawn Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Withdrawn_Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3i13Gev2hV
Compositional Entailment Learning for Hyperbolic Vision-Language Models
main
Active
Vision-Language Models;Hyperbolic Geometry;Representation Learning;CLIP
unsupervised, self-supervised, semi-supervised, and supervised representation learning
8;8;8;8
3;3;4;3
3;3;4;3
3;3;3;3
4;3;3;4
8
3.25
3.25
3
3.5
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Can the authors share the results of HyCoCLIP on RedCaps dataset?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The proposed method is simple and elegant and can be easily applied to large scale pretraining of vision-language models. The procedure to automatically generate paired image and text boxes is also relatively straightforward.\n* The empirical results show improvement across several tasks which demonstrates the improved representation learning - classification, retrieval, detection and understanding.\n* Table 1 results show that CLIP trained on additional image-text boxes doesn't improve the performance. However, training on the same data but with the proposed hierarchical compositional learning losses shows significant improvement in performance. This further demonstrates the effectiveness of the proposed technique." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work proposes a novel learning method for training vision-language models. Specifically, the method involves pretraining such models with 2 losses --- hierarchical compositional contrastive and entailment losses. The hierarchical concepts correspond to image boxes and the corresponding text boxes. The experiments are conducted on large scale dataset (GRIT) consisting of 20.5M image-text pairs. In Appendix A, the authors describe an automatic procedure to obtain the text boxes (noun entities in this case) and their corresponding bounding boxes in the images. The paper details empirical results on a variety of tasks including zero-shot image classification, retrieval, object detection and scene understanding." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "When training CLIP on additional image-text boxes shows no improvement (Table 1), it could be because there is limited new information in such examples (as original image-text pairs are already present in the training data). For a better understanding of this, an experiment such as this might help: split the GRIT dataset into 2 random subsets of 10M each. Then compare the results on the following settings:\n\n[1] CLIP trained on 10M image-text pairs\n\n[2] CLIP trained on 10M image-text pairs + additional image-text boxes\n\n[3] HyCoCLIP trained on 10M image-text pairs + additional image-text boxes\n\n[4] CLIP trained on 20M image-text pairs\n\nThe paper presents the comparison of [1] vs [2] vs [3] (but on all 20M image-text pairs) in Table 1 but comparing [3] vs [4] will help answer the above question. It is worth noting that even if the comparison shows similar results, [3] might still be slightly favored since it can be applied on top of any existing large dataset." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Could you please provide more details on the choice of hyperbolic space parameters?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. This paper is well-organized. The motivation is easy to follow, and the method is easy-to-understand.\n2. The proposed HyCoCLIP is novel and effective. It organizes data at multiple abstraction levels, providing an inspiring approach to multi-modal learning.\n3. The authors performs exhaustive experiments to show that the effectiveness of HyCoCLIP. It outperforms baselines on general and fine-grained image classification tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a novel approach named HyCoCLIP to vision-language modeling that leverages the hierarchical nature of hyperbolic space to better align visual and textual data. It organizes image and text data as hierarchical compositions, where objects within an image and their corresponding text descriptions are represented at different levels of abstraction in hyperbolic space.The experiments demonstrate that HyCoCLIP achieves significant performance improvements across multiple tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While the paper compare with CLIP and MERU, it should also compare some recently proposed VLMs.\n2. The paper should explore how sensitive the model is to the choice of hyperbolic space parameters." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "In Table 1/2, the authors bold the best performance overall across different model backbones. Wouldn’t it be more informative and fair to bold the best performance within each backbone group (e.g., ViT-S/16, ViT-B/16) to allow for a clearer comparison of HyCoCLIP’s performance relative to baselines on similar architectures?\n\nRegarding the choice of batch size, the authors used a batch size of 768 due to memory limitations. Did the authors consider implementing techniques like gradient accumulation to effectively simulate a larger batch size? 
This could provide further insights into how batch size impacts model performance, especially since batch size has been shown to affect contrastive learning tasks significantly." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "I think this paper is very well written and I find it easy to follow. Overall, the idea behind HyCoCLIP is well motivated, and I believe the authors have conducted sufficient experiments to empirically demonstrate the proposed method and model’s efficacy. The empirical performance of HyCoCLIP is very strong and, to the best of my knowledge, the proposed HyCoCLIP achieves state-of-the-art results on many of the reported zero-shot tasks for a contrastive-pretrained model." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors proposed to incorporate hierarchical pretraining for hyperbolic vision-language models, resulting in the model Hyperbolic Compositional CLIP (HyCoCLIP). The core idea is to construct object regions (image boxes) and corresponding text phrases (text boxes) to build a multi-layered, compositional hierarchy within the shared hyperbolic embedding space. HyCoCLIP shows competitive performance in zero-shot classification and retrieval. The authors also conducted experiments to show how HyCoCLIP can outperform CLIP and the hyperbolic contrastive model MERU in zero-shot hierarchical classification and scene understanding tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "One major concern is the incremental nature of this work. Hyperbolic embeddings for representing hierarchical relationships have been explored in previous models, and this paper primarily builds upon these established ideas. However, the specific contributions of HyCoCLIP, particularly in enhancing hierarchical and scene understanding tasks, offer sufficient merit to make this work valuable to the broader community." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Could the use of objects as supervision bias the model towards nouns and concrete concepts, possibly at the expense of attributes, dynamic actions (verbs), etc.?\n\nSome details that are unclear from Supp. A: How were abstract nouns filtered? Are the nouns that can be grounded open-vocabulary (not limited to a fixed list)? How accurate is the GLIP-based grounding procedure?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The central idea is clever and novel – utilizing the hierarchical nature of nouns mentioned in image captions as supervision for a hyperbolic model. 
The exposition is clear and concepts are well-illustrated. The quantitative experiments are extensive and overall convincing." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes the novel Compositional Entailment Learning framework to train VLMs, by using as supervision the hierarchical relations between images, captions, and constituent nouns and their bounding boxes. Their results show that this outperforms standard CLIP and the hyperbolic CLIP variant MERU on both standard multimodal and hierarchical benchmarks. This is supported by qualitative results illustrating the learned hierarchical semantics of the learned space." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Qualitative results (Sec 4, Supp 8) are fairly limited. In particular, it is missing a qualitative comparison to existing models (CLIP, MERU) to illustrate whether HyCoCLIP’s embedding space represents hierarchies in a more qualitatively satisfying way.\n\nWhile a comparison to CLIP trained from scratch is provided, recent work has found pretrained foundation VLMs to represent hierarchies in Euclidean space [1]. It would be useful to compare to such results to understand whether HyCoCLIP trained from scratch is competitive with such models.\n\n[1] Alper and Averbuch-Elor. Emergent Visual-Semantic Hierarchies in Image-Text Representations. ECCV 2024" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We explore the benefits brought in when using visual-semantic compositional hierarchies for learning hyperbolic representations through unsupervised contrastive training." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024compositional,\ntitle={Compositional Entailment Learning for Hyperbolic Vision-Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3i13Gev2hV},\nnote={under review}\n}" }, "abstract": { "value": "Image-text representation learning forms a cornerstone in vision-language models, where pairs of images and textual descriptions are contrastively aligned in a shared embedding space. Since visual and textual concepts are naturally hierarchical, recent work has shown that hyperbolic space can serve as a high-potential manifold to learn vision-language representation with strong downstream performance. In this work, for the first time we show how to fully leverage the innate hierarchical nature of hyperbolic embeddings by looking beyond individual image-text pairs. We propose Compositional Entailment Learning for hyperbolic vision-language models. The idea is that an image is not only described by a sentence but is itself a composition of multiple object boxes, each with their own textual description. Such information can be obtained freely by extracting nouns from sentences and using openly available localized grounding models. We show how to hierarchically organize images, image boxes, and their textual descriptions through contrastive and entailment-based objectives. Empirical evaluation on a hyperbolic vision-language model trained with millions of image-text pairs shows that the proposed compositional learning approach outperforms conventional Euclidean CLIP learning, as well as recent hyperbolic alternatives, with better zero-shot and retrieval generalization and clearly stronger hierarchical performance." 
}, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Vision-Language Models", "Hyperbolic Geometry", "Representation Learning", "CLIP" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/0b897dd302e368adf23f349ccbd01ddd0db0f53f.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Compositional Entailment Learning for Hyperbolic Vision-Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3i4OShnmnG
Gradient-Free Adversarial Attack on Time Series Regression: Targeting XAI Explanations
main
Active
Adversarial attacks;Explainable artificial intelligence;Time series regression;Robustness
interpretability and explainable AI
3;3;3;6
3;4;3;5
2;2;3;4
2;1;2;4
2;3;2;1
3.75
3.75
2.75
2.25
2
0.870388
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "As shown in the weakness section." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This paper is trying to solve a critical question in the XAI robustness domain, which is well-motivated." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a gradient-free adversarial attack method designed to target non-differentiable XAI techniques in time series regression problems. The author propose a novel gradient-free adversarial attack method specifically designed for time series explanations, targeting non-differentiable XAI techniques. The paper also introduces a Dynamic Time Warping (DTW) based objective function and a local attack strategy to enhance the effectiveness of the attack on time series data. The experiments conducted across three black-box models and two time series datasets demonstrate the vulnerability of current non-differentiable XAI methods and show the superiority of the proposed approach over existing attack methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper structure is poor and the not well-organized. Too much pages are used on related work and preliminary. The paper writing is not standard.\nThe methods seems to be lack of novelty. PSO is an existing method for black-box attack. The proposed method uses DTW of explainable result of X and X_adv as loss function. \nThe whole experiment setup is not very clear. There is no baseline comparison. No results to support the effectiveness of proposed methods. Table1 evaluated the robustness of different XAI models under DTW attack objective, but this is not what you what to show. What you want to show in this paper is the effectiveness of your method compared to other attack methods. Table 2 compared different objectives, but still cannot show the effectiveness of DTW loss. Your experiments cannot support your claims in the contribution. \n\nThe author employed three black-box models for time series classification: Transformer, TCN, and LSTM with input cell attention. However, these models are not the SoTA method for time series classification, the author may focus on more advanced models." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- The reasoning behind using DTW as the fitness function is missing. Given that differences between two time series explanations should ideally be compared step-by-step, the cumulative distance measured by DTW may not align well with the objective of perturbing XAI methods effectively. A more in-depth analysis on the rationale for incorporating DTW would be beneficial.\n- Minor typos:\n - In line 80-81, the figure reference is missing." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- **Novel research problem**: Black-box adversarial attacks against XAI methods for time series regression models have not yet been extensively studied.\n- **Well-written paper**: The paper is well-organized and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a black-box adversarial attack on Explainable Artificial Intelligence (XAI) methods for time series regression models. Previous studies on XAI attacks have primarily focused on white-box settings and models in computer vision. However, attacks on time series models in black-box settings remain largely unexplored. To address this gap, the authors adapt the Particle Swarm Optimization (PSO) black-box optimization algorithm for such attacks. Specifically, they initialize the algorithm with the original time series instead of zeros to improve local search performance. They also employ Dynamic Time Warping (DTW) as the objective function for PSO. Experimental results on several combinations of models and XAI methods demonstrate the effectiveness of the proposed method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- **Lack of justification for methodology design**: The choice of DTW in PSO is not explained. It appears to be selected solely because of its improved performance over top-K or center of mass approaches. See Question 1 for further details.\n- **No consideration of defense mechanisms**: The authors do not discuss potential defenses that could detect or reject adversarial examples.\n- **Significant adversarial perturbation**: In Figure 2, the generated adversarial examples deviate significantly from the original samples, making them potentially easy to detect with defense methods.\n- **Limited theoretical or technical contribution**: Given the weaknesses noted above, the paper’s contribution to attacking XAI methods appears limited in terms of theoretical or technical advancements. Overall, it reads more as an application of black-box adversarial attacks on XAI methods for time series regression models." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "The same as weakness." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- it is interesting to use PSO to solve XAI problem." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a novel gradient-free adversarial attack method to test the robustness of Explainable AI (XAI) explanations for time series regression problems. The proposed method uses Particle Swarm Optimization to generate adversarial examples without needing gradient information, making it more effective for real-world scenarios and non-differentiable XAI techniques." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- limited novelty. This paper only uses DTW as the distance for PSO. \n- limited experiments. The baselines and datasets for Figure 1 and Figure 2 are not enough. \n- why do the authors use DTW as the distance between two time series? could authors provide a theoretical analysis about what properties of DTW make it optimal compared with other distances, such as MAE (RMSE), cosine similarity?\n- The authors claim that people can easily detect subtle perturbation in Line 79 and provide a figure to validate it. However, in this figure, the time series is a smooth periodic function (sine function), and it is the smoothness and period that make the perturbation so obvious. In common time series, these good properties may not exist and noise would be everywhere. Could you use a time series in one real-world dataset, such as Traffic/Weather, to draw the same figure? let us see whether we can have the same conclusion then (unnoticeable in image but obvious in time series).\n- typos: Figure reference is broken in line 80." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "I am willing to increase my score if the paper's presentation is significantly improved.\n\n1. Rephrase \"XAI explanations\" (6 times in the paper), which sounds like a pleonasm. \"XAI methods\", \"XAI techniques\" make more sense.\n2. Fix Eq.(4). where $v()$ is not defined and $f(S)$ makes no sense. I suggest to define $v()$ using $f()$ and use this $v()$ in Eq.(4).\n3. I am confused about the use of $X$ and $x$. L182: Do you mean to write $\\pi_X$ instead? L219: Here $x$ is introduced, but $X$ was used in the previous section; please unify the notation.\n4. How is $I(x) \\cap I(x')$ defined? Why is there a minus sign, but the metric's range is [0, 1]?\n5. 
L304: use a different letter than $f$ to denote $M_f$, which was earlier used to denote the model function $f$.\n6. Report model predictive performance results on training and test sets (3 models x 2 datasets).\n7. L479: what is the \"KS explanation\"? Do you mean \"SHAP\"?\n8. Where is the \"global attack\" (evaluated in Sec. 5.5) described exactly? Please clarify it in Section 4.\n9. Please define a threat model under which the attacker operates. For example, what can be accessed by an attacker: an input sample, a dataset, a neural network model? See [1-5] for a few examples of discussing such a threat model in different papers on adversarial ML:\n [1] Glaze: Protecting artists from style mimicry by text-to-image models\n [2] Extracting training data from diffusion models\n [3] Extracting training data from large language models\n [4] RAB: Provable robustness against backdoor attacks\n [5] Local model poisoning attacks to byzantine-robust federated learning, etc.\n\n### Other feedback\n- L49: typo in \"unchanged.(Ghorbani et al., 2019).\"\n- L53: missing space in \"method(Huang et al., 2023)\"\n- L80: typo in \"Fig.??,\"\n- L106: missing full stop between \" Sec.3 Sec.4\" \n- \"Locally Interpretable Model-Agnostic Explainer\" should be \"Local Interpretable Model-agnostic Explanations\"\n- L172: missing space in in \"X,LIME\"\n- L221: missing comma in \"Then, the adversarial\" \n- Eq.(5) clarify that you write $I(x, f)$ to emphasize explaining model f. Instead, you could also write $I_f(x)$ or $I(x; f)$ \n- The title of Section 3.3 is capitalized, while the titles of Sections 3.1 & 3.2 were not.\n- L239: missing \"s\" in \"It calculate the\"\n- Eq.(6) write \"\\mathcal{D}_{\\mathrm{top-k}}\" instead (also in L451 etc.)\n- The title of Section 4 is not capitalized, while the titles of Sections 2 & 5 are capitalized.\n- L320: missing spaces in \"factorc1\" and \"factorc2.\"\n- Typo in the title of Section 5.4. – do you mean objective functions?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. The idea of using PSO to optimize for attacking explanations is interesting. Usually, in related papers, unrealistic assumptions are made about the white-box access to the model's weights.\n2. Focusing on XAI for time series regression is very original.\n3. It is commendable that the experiments already span four diverse explanation methods (LIME, SHAP, Saliency, SmoothGrad) and three model families (LSTM, TCN, Transformer), which show valuable comparisons.\n4. The paper is easy to read; figures and tables are appropriate." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a black-box attack to manipulate the output of explanation methods for time series regression. It relates to previous work on crafting adversarial examples for explanations of image classification using gradient-based optimization methods. Both the solution of using PSO, and the setting of time series regression, are novel in this line of work. Extensive experiments with 3 models (LSTM, TCN, Transformer) and 4 XAI methods show that popular algorithms are vulnerable to adversarial examples, which undermines their applicability, and facilitates future work on robust explanations in time series." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
**Experiments.** PSO is a random algorithm. How many random repetitions were initiated in experiments? Are metric values reported in Tables 1 & 2 aggregated means? Please add standard deviations to the analysis.\n2. **Code.** To the best of my knowledge, the paper is not supplemented with code that implements the method and allows to reproduce the experimental results. Can you share the code, e.g. on Anonymous GitHub?\n3. Overall, the **presentation** is poor (see suggestions below), and I count on it being fixed not to obfuscate the valuable contribution." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "A new attack method against XAI on time series regression problems, demonstrating the vulnerability of XAI methods." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024gradientfree,\ntitle={Gradient-Free Adversarial Attack on Time Series Regression: Targeting {XAI} Explanations},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=3i4OShnmnG},\nnote={under review}\n}" }, "abstract": { "value": "Explainable Artificial Intelligence (XAI) sheds light on the decision-making ground of black-box models by offering explanations.\nThese explanations need to be robust for trustworthy time series regression applications in high-stake areas like medicine or finance, which yet remains largely unexplored.\nFurthermore, most adversarial attack methods currently rely on white-box strategies, which require access to gradient information from both the model and the XAI method. In real-world scenarios, such information is often difficult or impossible to obtain.\nTo address these challenges, we propose a novel gradient-free adversarial attack method specifically designed for time series explanations, targeting non-differentiable XAI techniques.\nTo enhance the effectiveness of our method for time series data, we introduce an attack objective function based on Dynamic Time Warping (DTW).\nAdditionally, we implement an explanation-based local attack strategy, which ensures that the adversarial perturbations remain imperceptible within the time series data.\nIn our experiments, we generate adversarial examples to attack four different XAI methods across three black-box models, using two time series datasets.\nThe results reveal the vulnerability of current non-differentiable XAI methods.\nFurthermore, by comparing our approach with existing attack methods, we demonstrate the superiority of our proposed objective function and local attack strategy." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Adversarial attacks", "Explainable artificial intelligence", "Time series regression", "Robustness" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/588c8fa184798b6cdcb58dc5f6d25a055d407e2a.pdf" }, "presentation": null, "primary_area": { "value": "interpretability and explainable AI" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Gradient-Free Adversarial Attack on Time Series Regression: Targeting XAI Explanations" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
3iGponpukH
ScalePerson: Towards Good Practices in Evaluating Physical Adversarial Attacks on Person Detection
main
Withdraw
Physical Adversarial Attack;Person Detection;Dataset
datasets and benchmarks
Hui Wei;Yuanwei Liu;Xuemei Jia;Baraa Al-Hassani;Manhuen Zhang;Joey Tianyi Zhou;Zheng Wang
~Hui_Wei2;~Yuanwei_Liu4;~Xuemei_Jia1;~Baraa_Al-Hassani1;~Manhuen_Zhang1;~Joey_Tianyi_Zhou1;~Zheng_Wang14
3;5;5;6
5;3;4;3
2;2;3;3
1;2;2;3
3;2;4;3
4.75
3.75
2.5
2
3
-0.899229
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": { "value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors." } }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Please refer to the weakness part." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This work provides a novel dataset designed for studying physical adversarial attack. The dataset consists of person images with an uniformly distributed scale, while the existing datasets (INRIAPerson, COCOPerson) do not.\n2. The presentation is good.\n3. This work provides the extensive experimental results comparing the various adversarial attack methods between datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work introduces a novel dataset and benchmark for physical adversarial attacks on person detection task, focusing on fair comparison regarding various factors such as scale, orientation, cameras, etc. Also, this work suggests an evaluation metrics: Average Precision (AP) and Attack Success Rate (ASR) for benchmark. With the dataset and benchmark, the authors conduct an extensive evaluation with various attack methods and detectors across the existing and novel datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. In the SCALEPERSON dataset, the Average Precision (AP) for both benign and attacked settings appears to be too high, with small variance in scores across methods, except for AdvPatch and T-SEA. In other words, the proposed dataset seems too easy (to detect person), lacking the discriminative power needed to serve as an effective benchmark. The dataset is supposed to contain more dynamic scenes.\n\n2. As shown in Table 3, the influence of scene type varies across different attack methods. 
Therefore, to enable a fair comparison, the proportion of indoor and outdoor scene images is supposed to be more balanced, as is the case with the distribution of camera types.\n\n3. The advantage of using Attack Success Rate (ASR) as a metric is not clearly explained, for example, in comparison to Average Precision (AP).\n\n4. The ASR metric only accounts for detector false negatives (FNs, missed detections) caused by physical adversarial attacks and does not consider detector false positives (FPs). However, physical adversarial attacks also appear to cause detection FPs, as shown in Figure 2." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "Yes, Privacy, security and safety" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Pls see the weaknesses above" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. SCALEPERSON is the first dataset designed to address the uneven distribution of person scales in existing datasets, which is crucial for evaluating the effectiveness of adversarial attacks across different scales.\n2. The benchmark includes standardized evaluation metrics and a modular codebase that allows for transparent and reproducible assessments of attack effectiveness.\n3. The authors conduct an extensive evaluation of 11 state-of-the-art attacks against 7 mainstream detectors across 3 datasets, providing multidimensional quantitative analysis.\n4. The analysis uncovers deficiencies in current methods and offers novel insights to inspire future technological advancements." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the problem of evaluating physical adversarial attacks on person detection systems. The main issues highlighted are the lack of consistent experimental setups and ambiguous evaluation metrics that hinder fair comparisons, and the absence of a dedicated dataset designed for assessing physical adversarial attacks, leading to evaluations on datasets not ideally suited for this purpose.\n\nThe authors propose SCALEPERSON, the first dataset specifically designed for evaluating physical adversarial attacks in person detection. This dataset incorporates critical factors such as person scale, orientation, number of individuals, and capture devices, providing a more realistic and challenging testbed for evaluating such attacks. Additionally, they introduce a comprehensive benchmark with standardized evaluation metrics and a modular codebase to enhance reproducibility and transparency." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While SCALEPERSON addresses the issue of uneven person scale distribution, it may not cover all possible real-world scenarios, which could limit the generalizability of the findings.
The collection and use of images in the dataset must adhere to strict ethical guidelines to ensure personal privacy is not compromised.\n2. The effectiveness of the benchmark relies on the selection of attack methods included. If certain effective attacks are not considered, the benchmark may not fully represent the threat landscape." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to the weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Originality: The paper introduces SCALEPERSON, a novel dataset specifically designed for evaluating physical adversarial attacks on person detection systems\n2. Quality: The paper features a comprehensive benchmark that systematically evaluates 11 state-of-the-art attack methods against 7 mainstream detectors on 3 datasets of person detection, ensuring robust and detailed analysis.\n3. Clarity: The writing is clear and well-structured, effectively communicating the purpose and methodology behind the dataset and benchmark.\n4. Significance: The introduction of SCALEPERSON advances the field by providing a resource for evaluating person detection systems." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The manuscript introduces SCALEPERSON, a novel dataset designed to evaluate physical adversarial attacks on person detection systems. Addressing limitations in existing evaluations—such as inconsistent setups and lack of a dedicated dataset—the paper establishes a comprehensive benchmark that standardizes evaluation metrics and includes critical factors like person scale, orientation, number of individuals, and capture devices. The benchmark assesses 11 state-of-the-art attack methods against 7 mainstream detectors across 3 datasets, totaling 231 experiments, providing detailed insights into the efficacy of these attacks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. This work focuses solely on physical attacks on person detection, which limits its generalizability and practicability, as both object detection (such as the adopted detectors) and physical attacks typically involve multiple object classes, not just persons.\n2. I doubt the reasonableness of the claim that the number of persons in different scales should be evenly distributed in a dataset. Intuitively, an image can contain more small objects than large ones, so an even distribution of objects across various scales could lead to an imbalance in the number of images with different object sizes. This raises the question of which factor is more significant. 
Moreover, natural images often include objects at significantly different scales, which raises concerns about the reasonableness of using the introduced ScalePerson for evaluating attack performance on other physical dynamics besides scale.\n3. Physical factors are not well aligned in data collection, which may lead to misleading experimental results and conclusions, as previous works have demonstrated that some physical dynamics can also be exploited to perform attacks.\n4. Physical attacks should be conducted in real-world scenarios, whereas the perturbations are applied in the digital domain in the experiments. How, then, do the results demonstrate physical attack performance?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weakness." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "a)\tThis work is well organized and easy to follow. Its motivation is reasonable and provides a solid foundation for the proposed benchmark.\n\nb)\tThis work conducts thorough experiments across various attacks, detectors, and datasets to construct a fair benchmark for existing methods.\n\nc)\tThe quantitative analysis is detailed and uncovers weaknesses of existing datasets and methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work proposes a new person detection dataset, SCALEPERSON, for assessing existing physical adversarial attack methods on the person detection task. It builds a standard benchmark and evaluation metrics to measure the performance of attacks under different settings, which is transparent and insightful for future work on physical adversarial attacks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "i.\tMy main concern is the quality of the proposed dataset. How many unique persons are used in the SCALEPERSON dataset? According to Fig 3, it seems that the diversity of persons is low.\nii.\tThe AP performance is high, and ASR performance is low on the proposed dataset. Is it caused by the low difficulty and diversity of the proposed dataset? Except for T-SEA, the performance differences among existing methods are smaller on SCALEPERSON than on other datasets. Does this mean the proposed dataset is not a qualified benchmark for evaluating these methods?\niii.\tMore statistics on the proposed dataset should be provided, such as the gender ratio, occlusion levels, and ages." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Our paper introduces ScalePerson, a new dataset and benchmark for evaluating physical adversarial attacks in person detection, providing standardized metrics and comprehensive analyses across multiple attack methods and detectors."
}, "_bibtex": { "value": "@misc{\nwei2024scaleperson,\ntitle={ScalePerson: Towards Good Practices in Evaluating Physical Adversarial Attacks on Person Detection},\nauthor={Hui Wei and Yuanwei Liu and Xuemei Jia and Baraa Al-Hassani and Manhuen Zhang and Joey Tianyi Zhou and Zheng Wang},\nyear={2024},\nurl={https://openreview.net/forum?id=3iGponpukH}\n}" }, "abstract": { "value": "Person detection is widely used in safety-critical tasks but is known to be vulnerable to physical adversarial attacks. Numerous pioneering attack methods have been proposed, each claiming superior performance and exposing potential security risks. However, assessing actual progress in this field is challenging due to two common limitations in existing evaluations. First, inconsistent experimental setups and ambiguous evaluation metrics hinder fair comparisons. Second, the absence of a dedicated dataset for this task has led to evaluations on datasets originally designed for object detection, which, while informative, are inadequate. To address these limitations, we present a comprehensive benchmark and introduce ScalePerson, the first dataset specifically designed for evaluating physical adversarial attacks in person detection. This dataset incorporates critical factors for this task, such as person scale, orientation, number of individuals, and capture devices. Our benchmark includes standardized evaluation metrics and a modular codebase to enhance reproducibility and transparency. Leveraging this benchmark, we conduct an extensive evaluation of 11 state-of-the-art attacks against 7 mainstream detectors across 3 datasets, totaling 231 experiments. We present detailed analyses from multiple perspectives, examining the impact of various factors on the efficacy of physical adversarial attacks in person detection. The source code and dataset will be made publicly available upon acceptance of this paper." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": { "value": [ "~Hui_Wei2", "~Yuanwei_Liu4", "~Xuemei_Jia1", "~Baraa_Al-Hassani1", "~Manhuen_Zhang1", "~Joey_Tianyi_Zhou1", "~Zheng_Wang14" ] }, "authors": { "value": [ "Hui Wei", "Yuanwei Liu", "Xuemei Jia", "Baraa Al-Hassani", "Manhuen Zhang", "Joey Tianyi Zhou", "Zheng Wang" ] }, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Physical Adversarial Attack", "Person Detection", "Dataset" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": { "value": "wei|scaleperson_towards_good_practices_in_evaluating_physical_adversarial_attacks_on_person_detection" }, "pdf": { "value": "/pdf/b58537cd03499db9a94506d08343ddc0c0dfad9e.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. 
If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "ScalePerson: Towards Good Practices in Evaluating Physical Adversarial Attacks on Person Detection" }, "venue": { "value": "ICLR 2025 Conference Withdrawn Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Withdrawn_Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]