bibtex_url (null) | proceedings (string, 42 chars) | bibtext (string, 197 to 848 chars) | abstract (string, 303 to 3.45k chars) | title (string, 10 to 159 chars) | authors (sequence, 1 to 34 items, nullable) | id (44 classes) | arxiv_id (string, 0 to 10 chars) | GitHub (sequence, 1 item) | paper_page (899 classes) | n_linked_authors (int64, -1 to 13) | upvotes (int64, -1 to 109) | num_comments (int64, -1 to 13) | n_authors (int64, -1 to 92) | Models (sequence, 0 to 100 items) | Datasets (sequence, 0 to 19 items) | Spaces (sequence, 0 to 100 items) | old_Models (sequence, 0 to 100 items) | old_Datasets (sequence, 0 to 19 items) | old_Spaces (sequence, 0 to 100 items) | paper_page_exists_pre_conf (int64, 0 to 1) | type (2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | https://openreview.net/forum?id=wnPlJNiqfA | @inproceedings{
zhang2024kfnn,
title={{KFNN}: K-Free Nearest Neighbor For Crowdsourcing},
author={Wenjun Zhang and Liangxiao Jiang and Chaoqun Li},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wnPlJNiqfA}
} | To reduce annotation costs, it is common in crowdsourcing to collect only a few noisy labels from different crowd workers for each instance. However, the limited noisy labels restrict the performance of label integration algorithms in inferring the unknown true label for the instance. Recent works have shown that leveraging neighbor instances can help alleviate this problem. Yet, these works all assume that each instance has the same neighborhood size, which defies common sense. To address this gap, we propose a novel label integration algorithm called K-free nearest neighbor (KFNN). In KFNN, the neighborhood size of each instance is automatically determined based on its attributes and noisy labels. Specifically, KFNN initially estimates a Mahalanobis distance distribution from the attribute space to model the relationship between each instance and all classes. This distance distribution is then utilized to enhance the multiple noisy label distribution of each instance. Subsequently, a Kalman filter is designed to mitigate the impact of noise incurred by neighbor instances. Finally, KFNN determines the optimal neighborhood size via max-margin learning. Extensive experimental results demonstrate that KFNN significantly outperforms all the other state-of-the-art algorithms and exhibits greater robustness in various crowdsourcing scenarios. | KFNN: K-Free Nearest Neighbor For Crowdsourcing | [
"Wenjun Zhang",
"Liangxiao Jiang",
"Chaoqun Li"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
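The KFNN abstract above opens with a step that is easy to make concrete: measuring how close each instance sits to every class in Mahalanobis distance, using crowd majority votes as provisional labels. Below is a minimal numpy sketch of that step only; the function name, the shared-covariance choice, and the toy data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mahalanobis_to_classes(X, provisional_labels, n_classes, eps=1e-6):
    """Distance from each instance to each class under a shared covariance.

    X: (n, d) attribute matrix; provisional_labels: (n,) majority-vote labels.
    Returns an (n, n_classes) matrix of Mahalanobis distances, a simple stand-in
    for the distance distribution KFNN's first step estimates.
    """
    n, d = X.shape
    cov = np.cov(X, rowvar=False) + eps * np.eye(d)  # regularized covariance
    prec = np.linalg.inv(cov)                        # shared precision matrix
    dists = np.zeros((n, n_classes))
    for c in range(n_classes):
        mu = X[provisional_labels == c].mean(axis=0)  # class centroid
        diff = X - mu
        dists[:, c] = np.sqrt(np.einsum("nd,de,ne->n", diff, prec, diff))
    return dists

# Toy usage: two classes in 2-D, labels from (possibly noisy) majority voting.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y_vote = np.array([0] * 50 + [1] * 50)
print(mahalanobis_to_classes(X, y_vote, n_classes=2).shape)  # (100, 2)
```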
null | https://openreview.net/forum?id=wm9JZq7RCe | @inproceedings{
rajaraman2024an,
title={An Analysis of Tokenization: Transformers under Markov Data},
author={Nived Rajaraman and Jiantao Jiao and Kannan Ramchandran},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wm9JZq7RCe}
} | While there has been a large body of research attempting to circumvent tokenization for language modeling (Clark et al. 2022, Xue et al. 2022), the current consensus is that it is a necessary initial step for designing state-of-the-art performant language models. In this paper, we investigate tokenization from a theoretical point of view by studying the behavior of transformers on simple data generating processes. When trained on data drawn from certain simple $k^{\text{th}}$-order Markov processes for $k > 1$, transformers exhibit a surprising phenomenon - in the absence of tokenization, they empirically are incredibly slow or fail to learn the right distribution and predict characters according to a unigram model (Makkuva et al. 2024). With the addition of tokenization, however, we empirically observe that transformers break through this barrier and are able to model the probabilities of sequences drawn from the source near-optimally, achieving small cross-entropy loss. With this observation as starting point, we study the end-to-end cross-entropy loss achieved by transformers with and without tokenization. With the appropriate tokenization, we show that even the simplest unigram models (over tokens) learnt by transformers are able to model the probability of sequences drawn from $k^{\text{th}}$-order Markov sources near optimally. Our analysis provides a justification for the use of tokenization in practice through studying the behavior of transformers on Markovian data. | An Analysis of Tokenization: Transformers under Markov Data | [
"Nived Rajaraman",
"Jiantao Jiao",
"Kannan Ramchandran"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
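The phenomenon this tokenization abstract describes can be reproduced in a few lines: on a Markov source, the best character-level unigram model is pinned at the entropy of the marginal distribution, while a unigram model over multi-character tokens closes part of the gap to the true entropy rate (longer tokens close more). The toy below uses an order-1 binary source to keep the arithmetic short, whereas the paper studies $k > 1$; all numbers and names are illustrative, not the paper's experiments.

```python
import math, random

random.seed(0)
# An order-1 binary Markov source:
# P(next=1 | prev=0) = 0.9, P(next=1 | prev=1) = 0.1  -> strongly alternating.
p = {0: 0.9, 1: 0.1}
seq, s = [], 0
for _ in range(200_000):
    s = 1 if random.random() < p[s] else 0
    seq.append(s)

def H(probs):  # Shannon entropy in bits
    return -sum(q * math.log2(q) for q in probs if q > 0)

# Best any character-level unigram model can do: entropy of the marginal.
p1 = sum(seq) / len(seq)
print(f"unigram cross-entropy ~ {H([p1, 1 - p1]):.3f} bits/char")   # ~1.000

# True entropy rate: the stationary distribution is (0.5, 0.5), so
# H_rate = H(0.9, 0.1) ~ 0.469 bits/char.
print(f"source entropy rate   ~ {H([0.9, 0.1]):.3f} bits/char")

# Tokenize into 2-char blocks; a unigram model *over tokens* now captures
# part of the dependence (cross-entropy per char = token entropy / 2).
tokens = [tuple(seq[i:i + 2]) for i in range(0, len(seq) - 1, 2)]
freq = {}
for t in tokens:
    freq[t] = freq.get(t, 0) + 1
tok_H = H([c / len(tokens) for c in freq.values()])
print(f"2-char token unigram  ~ {tok_H / 2:.3f} bits/char")          # ~0.735
```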
null | https://openreview.net/forum?id=wlqfOvlTQz | @inproceedings{
merlis2024reinforcement,
title={Reinforcement Learning with Lookahead Information},
author={Nadav Merlis},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wlqfOvlTQz}
} | We study reinforcement learning (RL) problems in which agents observe the reward or transition realizations at their current state _before deciding which action to take_. Such observations are available in many applications, including transactions, navigation and more. When the environment is known, previous work shows that this lookahead information can drastically increase the collected reward. However, outside of specific applications, existing approaches for interacting with unknown environments are not well-adapted to these observations. In this work, we close this gap and design provably-efficient learning algorithms able to incorporate lookahead information. To achieve this, we perform planning using the empirical distribution of the reward and transition observations, in contrast to vanilla approaches that only rely on estimated expectations. We prove that our algorithms achieve tight regret versus a baseline that also has access to lookahead information -- linearly increasing the amount of collected reward compared to agents that cannot handle lookahead information. | Reinforcement Learning with Lookahead Information | [
"Nadav Merlis"
] | NeurIPS.cc/2024/Conference | 2406.02258 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=wlcm21C4nk | @inproceedings{
yu2024advancing,
title={Advancing Training Efficiency of Deep Spiking Neural Networks through Rate-based Backpropagation},
author={Chengting Yu and Lei Liu and Gaoang Wang and Erping Li and Aili Wang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wlcm21C4nk}
} | Recent insights have revealed that rate-coding is a primary form of information representation captured by surrogate-gradient-based Backpropagation Through Time (BPTT) in training deep Spiking Neural Networks (SNNs). Motivated by these findings, we propose rate-based backpropagation, a training strategy specifically designed to exploit rate-based representations to reduce the complexity of BPTT. Our method minimizes reliance on detailed temporal derivatives by focusing on averaged dynamics, streamlining the computational graph to reduce memory and computational demands of SNNs training. We substantiate the rationality of the gradient approximation between BPTT and the proposed method through both theoretical analysis and empirical observations. Comprehensive experiments on CIFAR-10, CIFAR-100, ImageNet, and CIFAR10-DVS validate that our method achieves comparable performance to BPTT counterparts, and surpasses state-of-the-art efficient training techniques. By leveraging the inherent benefits of rate-coding, this work sets the stage for more scalable and efficient SNNs training within resource-constrained environments. | Advancing Training Efficiency of Deep Spiking Neural Networks through Rate-based Backpropagation | [
"Chengting Yu",
"Lei Liu",
"Gaoang Wang",
"Erping Li",
"Aili Wang"
] | NeurIPS.cc/2024/Conference | 2410.11488 | [
"https://github.com/tab-ct/rate-based-backpropagation"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=wlLjYl0Gi6 | @inproceedings{
fu2024efficient,
title={Efficient {LLM} Scheduling by Learning to Rank},
author={Yichao Fu and Siqi Zhu and Runlong Su and Aurick Qiao and Ion Stoica and Hao Zhang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wlLjYl0Gi6}
} | In Large Language Model (LLM) inference, the output length of an LLM request is typically not known a priori. Consequently, most LLM serving systems employ a simple first-come-first-serve (FCFS) scheduling strategy, leading to Head-Of-Line (HOL) blocking and reduced throughput and service quality.
In this paper, we reexamine this assumption -- we show that, although predicting the exact generation length of each request is infeasible, it is possible to predict the relative ranks of output lengths in a batch of requests, using learning to rank. The ranking information offers valuable guidance for scheduling requests. Building on this insight, we develop a novel scheduler for LLM inference and serving that can approximate the shortest-job-first (SJF) schedule better than existing approaches. We integrate this scheduler with the state-of-the-art LLM serving system and show significant performance improvement in several important applications: 2.8x lower latency in chatbot serving and 6.5x higher throughput in synthetic data generation. Our code is available at https://github.com/hao-ai-lab/vllm-ltr.git | Efficient LLM Scheduling by Learning to Rank | [
"Yichao Fu",
"Siqi Zhu",
"Runlong Su",
"Aurick Qiao",
"Ion Stoica",
"Hao Zhang"
] | NeurIPS.cc/2024/Conference | 2408.15792 | [
"https://github.com/hao-ai-lab/vllm-ltr"
] | https://huggingface.co/papers/2408.15792 | 2 | 19 | 2 | 6 | [] | [] | [] | [] | [] | [] | 1 | poster |
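The core claim of the scheduling abstract above, that relative ranks of output lengths suffice to approximate shortest-job-first (SJF), is easy to check in a toy single-server simulation. The sketch below is not the paper's scheduler (see the vllm-ltr repository for that); the noisy score is an illustrative stand-in for a learned ranking model.

```python
import random

random.seed(0)

def avg_completion_time(service_times, order):
    """Mean completion (flow) time of requests served sequentially in 'order'."""
    t, waits = 0.0, []
    for i in order:
        t += service_times[i]
        waits.append(t)
    return sum(waits) / len(waits)

# Synthetic request batch: true (unknown) generation lengths.
true_len = [random.expovariate(1 / 200) for _ in range(1000)]

# A "learning to rank" predictor only needs noisy scores whose *order*
# correlates with the true lengths (illustrative stand-in for the model).
noisy_score = [l * random.lognormvariate(0, 0.5) for l in true_len]

fcfs = list(range(len(true_len)))                    # arrival order
sjf = sorted(fcfs, key=lambda i: true_len[i])        # oracle schedule
ranked = sorted(fcfs, key=lambda i: noisy_score[i])  # predicted ranks

print(f"FCFS          : {avg_completion_time(true_len, fcfs):10.1f}")
print(f"oracle SJF    : {avg_completion_time(true_len, sjf):10.1f}")
print(f"ranked (noisy): {avg_completion_time(true_len, ranked):10.1f}")
```

Even with substantially noisy scores, the rank-based schedule lands much closer to oracle SJF than to FCFS, which is the intuition behind scheduling on predicted ranks rather than predicted lengths.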
null | https://openreview.net/forum?id=wl44W8xpc7 | @inproceedings{
ko2024learning,
title={Learning Infinitesimal Generators of Continuous Symmetries from Data},
author={Gyeonghoon Ko and Hyunsu Kim and Juho Lee},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wl44W8xpc7}
} | Exploiting symmetry inherent in data can significantly improve the sample efficiency of a learning procedure and the generalization of learned models. When data clearly reveals underlying symmetry, leveraging this symmetry can naturally inform the design of model architectures or learning strategies. Yet, in numerous real-world scenarios, identifying the specific symmetry within a given data distribution often proves ambiguous. To tackle this, some existing works learn symmetry in a data-driven manner, parameterizing and learning expected symmetry through data. However, these methods often rely on explicit knowledge, such as pre-defined Lie groups, which are typically restricted to linear or affine transformations. In this paper, we propose a novel symmetry learning algorithm based on transformations defined with one-parameter groups, continuously parameterized transformations flowing along the directions of vector fields called infinitesimal generators. Our method is built upon minimal inductive biases, encompassing not only commonly utilized symmetries rooted in Lie groups but also extending to symmetries derived from nonlinear generators. To learn these symmetries, we introduce a notion of a validity score that examine whether the transformed data is still valid for the given task. The validity score is designed to be fully differentiable and easily computable, enabling effective searches for transformations that achieve symmetries innate to the data. We apply our method mainly in two domains: image data and partial differential equations, and demonstrate its advantages. Our codes are available at \url{https://github.com/kogyeonghoon/learning-symmetry-from-scratch.git}. | Learning Infinitesimal Generators of Continuous Symmetries from Data | [
"Gyeonghoon Ko",
"Hyunsu Kim",
"Juho Lee"
] | NeurIPS.cc/2024/Conference | 2410.21853 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
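The central object in the symmetry-learning abstract above, a one-parameter group of transformations flowing along an infinitesimal generator, can be made concrete by numerically integrating the flow of a vector field. The sketch below is illustrative only (generic RK4 integration, not the authors' method); the example generators are standard textbook choices.

```python
import numpy as np

def flow(V, x0, t, steps=1000):
    """Transport x0 along the vector field V for 'time' t (RK4 integration).

    The map x0 -> flow(V, x0, t) is the one-parameter group exp(tV): the
    continuous symmetry generated by the infinitesimal generator V.
    """
    x, h = np.asarray(x0, float), t / steps
    for _ in range(steps):
        k1 = V(x)
        k2 = V(x + 0.5 * h * k1)
        k3 = V(x + 0.5 * h * k2)
        k4 = V(x + h * k3)
        x = x + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

# The linear generator V(x, y) = (-y, x) exponentiates to rotation by angle t.
V_rot = lambda p: np.array([-p[1], p[0]])
print(flow(V_rot, [1.0, 0.0], np.pi / 2))  # ~[0, 1]: a 90-degree rotation
# Nonlinear generators (beyond Lie-group parameterizations) flow the same way.
V_nl = lambda p: np.array([p[0] ** 2, np.sin(p[1])])
print(flow(V_nl, [0.5, 0.3], 0.4))
```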
null | https://openreview.net/forum?id=wkwGedn19x | @inproceedings{
yang2024scaling,
title={Scaling White-Box Transformers for Vision},
author={Jinrui Yang and Xianhang Li and Druv Pai and Yuyin Zhou and Yi Ma and Yaodong Yu and Cihang Xie},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wkwGedn19x}
} | CRATE, a white-box transformer architecture designed to learn compressed and sparse representations, offers an intriguing alternative to standard vision transformers (ViTs) due to its inherent mathematical interpretability. Despite extensive investigations into the scaling behaviors of language and vision transformers, the scalability of CRATE remains an open question which this paper aims to address.
Specifically, we propose CRATE-$\alpha$, featuring strategic yet minimal modifications to the sparse coding block in the CRATE architecture design, and a light training recipe designed to improve the scalability of CRATE.
Through extensive experiments, we demonstrate that CRATE-$\alpha$ can effectively scale with larger model sizes and datasets.
For example, our CRATE-$\alpha$-B substantially outperforms the prior best CRATE-B model on ImageNet classification by 3.7%, achieving an accuracy of 83.2%. Meanwhile, when scaling further, our CRATE-$\alpha$-L obtains an ImageNet classification accuracy of 85.1%. More notably, these performance improvements are achieved while preserving, and potentially even enhancing, the interpretability of learned CRATE models, as we demonstrate by showing that the learned token representations of increasingly larger trained CRATE-$\alpha$ models yield increasingly higher-quality unsupervised object segmentation of images. | Scaling White-Box Transformers for Vision | [
"Jinrui Yang",
"Xianhang Li",
"Druv Pai",
"Yuyin Zhou",
"Yi Ma",
"Yaodong Yu",
"Cihang Xie"
] | NeurIPS.cc/2024/Conference | 2405.20299 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=wjbTHLUSzU | @inproceedings{
liu2024tsds,
title={{TSDS}: Data Selection for Task-Specific Model Finetuning},
author={Zifan Liu and Amin Karbasi and Theodoros Rekatsinas},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wjbTHLUSzU}
} | Finetuning foundation models for specific tasks is an emerging paradigm in modern machine learning. The efficacy of task-specific finetuning largely depends on the selection of appropriate training data. We present TSDS (Task-Specific Data Selection), a framework to select data for task-specific model finetuning, guided by a small but representative set of examples from the target task. To do so, we formulate data selection for task-specific finetuning as an optimization problem with a distribution alignment loss based on optimal transport to capture the discrepancy between the selected data and the target distribution. In addition, we add a regularizer to encourage the diversity of the selected data and incorporate kernel density estimation into the regularizer to reduce the negative effects of near-duplicates among the candidate data.
We connect our optimization problem to nearest neighbor search and design efficient algorithms to compute the optimal solution based on approximate nearest neighbor search techniques.
We evaluate our method on data selection for both continued pretraining and instruction tuning of language models.
We show that instruction tuning using data selected by our method with a 1\% selection ratio often outperforms using the full dataset and beats the baseline selection methods by 1.5 points in F1 score on average. | TSDS: Data Selection for Task-Specific Model Finetuning | [
"Zifan Liu",
"Amin Karbasi",
"Theodoros Rekatsinas"
] | NeurIPS.cc/2024/Conference | 2410.11303 | [
"https://github.com/zifanl/tsds"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=wiMaws0FWB | @inproceedings{
pesme2024implicit,
title={Implicit Bias of Mirror Flow on Separable Data},
author={Scott Pesme and Radu-Alexandru Dragomir and Nicolas Flammarion},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wiMaws0FWB}
} | We examine the continuous-time counterpart of mirror descent, namely mirror flow, on classification problems which are linearly separable. Such problems are minimised ‘at infinity’ and have many possible solutions; we study which solution is preferred by the algorithm depending on the mirror potential. For exponential-tailed losses and under mild assumptions on the potential, we show that the iterates converge in direction towards a $\phi_\infty$-maximum margin classifier. The function $\phi_\infty$ is the horizon function of the mirror potential and characterises its shape ‘at infinity’. When the potential is separable, a simple formula allows us to compute this function. We analyse several examples of potentials and provide numerical experiments highlighting our results. | Implicit Bias of Mirror Flow on Separable Data | [
"Scott Pesme",
"Radu-Alexandru Dragomir",
"Nicolas Flammarion"
] | NeurIPS.cc/2024/Conference | 2406.12763 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
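For readers skimming the mirror-flow abstract above, the dynamics and the implicit-bias statement can be written compactly in standard notation. This is a schematic restatement assuming linearly separable data $(x_i, y_i)$ and loss $L$; the precise conditions (exponential-tailed loss, assumptions on the potential $\phi$) are in the paper.

```latex
% Mirror flow evolves the dual variable through the mirror potential \phi:
\frac{\mathrm{d}}{\mathrm{d}t}\,\nabla\phi\big(w(t)\big) \;=\; -\nabla L\big(w(t)\big).
% Implicit-bias statement (schematic): the direction of the iterates
% converges to a \phi_\infty-maximum margin classifier, where \phi_\infty
% is the horizon function of \phi:
\frac{w(t)}{\lVert w(t)\rVert} \;\xrightarrow[t\to\infty]{}\; \frac{w^\star}{\lVert w^\star\rVert},
\qquad
w^\star \in \arg\min_{w}\ \phi_\infty(w)
\quad \text{s.t.} \quad y_i\,\langle w, x_i\rangle \ge 1 \ \ \forall i.
```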
null | https://openreview.net/forum?id=wiK6bwuxjE | @inproceedings{
jiang2024monomae,
title={Mono{MAE}: Enhancing Monocular 3D Detection through Depth-Aware Masked Autoencoders},
author={Xueying Jiang and Sheng Jin and Xiaoqin Zhang and Ling Shao and Shijian Lu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wiK6bwuxjE}
} | Monocular 3D object detection aims for precise 3D localization and identification of objects from a single-view image. Despite its recent progress, it often struggles while handling pervasive object occlusions that tend to complicate and degrade the prediction of object dimensions, depths, and orientations. We design MonoMAE, a monocular 3D detector inspired by Masked Autoencoders that addresses the object occlusion issue by masking and reconstructing objects in the feature space. MonoMAE consists of two novel designs. The first is depth-aware masking that selectively masks certain parts of non-occluded object queries in the feature space for simulating occluded object queries for network training. It masks non-occluded object queries by balancing the masked and preserved query portions adaptively according to the depth information. The second is lightweight query completion that works with the depth-aware masking to learn to reconstruct and complete the masked object queries. With the proposed feature-space occlusion and completion, MonoMAE learns enriched 3D representations that achieve superior monocular 3D detection performance qualitatively and quantitatively for both occluded and non-occluded objects. Additionally, MonoMAE learns generalizable representations that can work well in new domains. | MonoMAE: Enhancing Monocular 3D Detection through Depth-Aware Masked Autoencoders | [
"Xueying Jiang",
"Sheng Jin",
"Xiaoqin Zhang",
"Ling Shao",
"Shijian Lu"
] | NeurIPS.cc/2024/Conference | 2405.07696 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=wiEHZSV15I | @inproceedings{
deng2024parsimony,
title={Parsimony or Capability? Decomposition Delivers Both in Long-term Time Series Forecasting},
author={Jinliang Deng and Feiyang Ye and Du Yin and Xuan Song and Ivor Tsang and Hui Xiong},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wiEHZSV15I}
} | Long-term time series forecasting (LTSF) represents a critical frontier in time series analysis, characterized by extensive input sequences, as opposed to the shorter spans typical of traditional approaches. While longer sequences inherently offer richer information for enhanced predictive precision, prevailing studies often respond by escalating model complexity. These intricate models can inflate into millions of parameters, resulting in prohibitive parameter scales. Our study demonstrates, through both theoretical and empirical evidence, that decomposition is key to containing excessive model inflation while achieving uniformly superior and robust results across various datasets. Remarkably, by tailoring decomposition to the intrinsic dynamics of time series data, our proposed model outperforms existing benchmarks, using over 99\% fewer parameters than the majority of competing methods. Through this work, we aim to unleash the power of a restricted set of parameters by capitalizing on domain characteristics—a timely reminder that in the realm of LTSF, bigger is not invariably better. The code is available at \url{https://anonymous.4open.science/r/SSCNN-321D/}. | Parsimony or Capability? Decomposition Delivers Both in Long-term Time Series Forecasting | [
"Jinliang Deng",
"Feiyang Ye",
"Du Yin",
"Xuan Song",
"Ivor Tsang",
"Hui Xiong"
] | NeurIPS.cc/2024/Conference | 2401.11929 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=wgpmDyJgsg | @inproceedings{
zhao2024sparseview,
title={Sparse-view Pose Estimation and Reconstruction via Analysis by Generative Synthesis},
author={Qitao Zhao and Shubham Tulsiani},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wgpmDyJgsg}
} | Inferring the 3D structure underlying a set of multi-view images typically requires solving two co-dependent tasks -- accurate 3D reconstruction requires precise camera poses, and predicting camera poses relies on (implicitly or explicitly) modeling the underlying 3D. The classical framework of analysis by synthesis casts this inference as a joint optimization seeking to explain the observed pixels, and recent instantiations learn expressive 3D representations (e.g., Neural Fields) with gradient-descent-based pose refinement of initial pose estimates. However, given a sparse set of observed views, the observations may not provide sufficient direct evidence to obtain complete and accurate 3D. Moreover, large errors in pose estimation may not be easily corrected and can further degrade the inferred 3D. To allow robust 3D reconstruction and pose estimation in this challenging setup, we propose SparseAGS, a method that adapts this analysis-by-synthesis approach by: a) including novel-view-synthesis-based generative priors in conjunction with photometric objectives to improve the quality of the inferred 3D, and b) explicitly reasoning about outliers and using a discrete search with a continuous optimization-based strategy to correct them. We validate our framework across real-world and synthetic datasets in combination with several off-the-shelf pose estimation systems as initialization. We find that it significantly improves the base systems' pose accuracy while yielding high-quality 3D reconstructions that outperform the results from current multi-view reconstruction baselines. | Sparse-view Pose Estimation and Reconstruction via Analysis by Generative Synthesis | [
"Qitao Zhao",
"Shubham Tulsiani"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=wfU2CdgmWt | @inproceedings{
domingo-enrich2024stochastic,
title={Stochastic Optimal Control Matching},
author={Carles Domingo-Enrich and Jiequn Han and Brandon Amos and Joan Bruna and Ricky T. Q. Chen},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wfU2CdgmWt}
} | Stochastic optimal control, which has the goal of driving the behavior of noisy systems, is broadly applicable in science, engineering and artificial intelligence. Our work introduces Stochastic Optimal Control Matching (SOCM), a novel Iterative Diffusion Optimization (IDO) technique for stochastic optimal control that stems from the same philosophy as the conditional score matching loss for diffusion models. That is, the control is learned via a least squares problem by trying to fit a matching vector field. The training loss, which is closely connected to the cross-entropy loss, is optimized with respect to both the control function and a family of reparameterization matrices which appear in the matching vector field. The optimization with respect to the reparameterization matrices aims at minimizing the variance of the matching vector field. Experimentally, our algorithm achieves lower error than all the existing IDO techniques for stochastic optimal control for three out of four control problems, in some cases by an order of magnitude. The key idea underlying SOCM is the path-wise reparameterization trick, a novel technique that may be of independent interest. | Stochastic Optimal Control Matching | [
"Carles Domingo-Enrich",
"Jiequn Han",
"Brandon Amos",
"Joan Bruna",
"Ricky T. Q. Chen"
] | NeurIPS.cc/2024/Conference | 2312.02027 | [
"https://github.com/facebookresearch/soc-matching"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=weemASPtzg | @inproceedings{
var{\i}c{\i}2024linear,
title={Linear Causal Representation Learning from Unknown Multi-node Interventions},
author={Burak Var{\i}c{\i} and Emre Acart{\"u}rk and Karthikeyan Shanmugam and Ali Tajer},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=weemASPtzg}
} | Despite the multifaceted recent advances in interventional causal representation learning (CRL), they primarily focus on the stylized assumption of single-node interventions. This assumption is not valid in a wide range of applications, and generally, the subset of nodes intervened in an interventional environment is *fully unknown*. This paper focuses on interventional CRL under unknown multi-node (UMN) interventional environments and establishes the first identifiability results for *general* latent causal models (parametric or nonparametric) under stochastic interventions (soft or hard) and linear transformation from the latent to observed space. Specifically, it is established that given sufficiently diverse interventional environments, (i) identifiability *up to ancestors* is possible using only *soft* interventions, and (ii) *perfect* identifiability is possible using *hard* interventions. Remarkably, these guarantees match the best-known results for more restrictive single-node interventions. Furthermore, CRL algorithms are also provided that achieve the identifiability guarantees. A central step in designing these algorithms is establishing the relationships between UMN interventional CRL and score functions associated with the statistical models of different interventional environments. Establishing these relationships also serves as constructive proof of the identifiability guarantees. | Linear Causal Representation Learning from Unknown Multi-node Interventions | [
"Burak Varıcı",
"Emre Acartürk",
"Karthikeyan Shanmugam",
"Ali Tajer"
] | NeurIPS.cc/2024/Conference | 2406.05937 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=wduRaBDRBS | @inproceedings{
lee2024video,
title={Video Token Merging for Long Video Understanding},
author={Seon-Ho Lee and Jue Wang and Zhikang Zhang and David Fan and Xinyu Li},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wduRaBDRBS}
} | As the scale of data and models for video understanding rapidly expands, handling long-form video input in transformer-based models presents a practical challenge. Rather than resorting to input sampling or token dropping, which may result in information loss, token merging shows promising results when used in combination with transformers. However, the application of token merging for long-form video processing is not trivial. We begin with the premise that token merging should not rely solely on the similarity of video tokens; the saliency of tokens should also be considered. To address this, we explore various video token merging strategies for long-form video classification, starting with a simple extension of image token merging, moving to region-concentrated merging, and finally proposing a learnable video token merging (VTM) algorithm that dynamically merges tokens based on their saliency. Extensive experimental results show that we achieve better or comparable performance on the LVU, COIN, and Breakfast datasets. Moreover, our approach significantly reduces memory costs by 84% and boosts throughput by approximately 6.89 times compared to baseline algorithms. | Video Token Merging for Long Video Understanding | [
"Seon-Ho Lee",
"Jue Wang",
"Zhikang Zhang",
"David Fan",
"Xinyu Li"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
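The premise of the VTM abstract above, merging by similarity while protecting salient tokens, can be sketched greedily in a few lines of numpy. This is a toy illustration of saliency-aware merging, not the paper's learnable VTM module; the penalty form and all names are assumptions.

```python
import numpy as np

def merge_tokens(tokens, saliency, r):
    """Greedily merge r pairs of most-similar tokens, protecting salient ones.

    tokens: (n, d) features; saliency: (n,) scores in [0, 1]; r: merge count.
    Similar, low-saliency token pairs are averaged together; the survivor
    keeps its original saliency score (a simplification).
    """
    tokens, saliency = tokens.copy(), saliency.copy()
    for _ in range(r):
        x = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
        sim = x @ x.T                                 # cosine similarity
        np.fill_diagonal(sim, -np.inf)
        # Down-weight pairs involving salient tokens so they survive.
        penalty = saliency[:, None] + saliency[None, :]
        i, j = np.unravel_index(np.argmax(sim - penalty), sim.shape)
        merged = (tokens[i] + tokens[j]) / 2          # average the pair
        keep = [k for k in range(len(tokens)) if k != j]
        tokens, saliency = tokens[keep], saliency[keep]
        tokens[i if i < j else i - 1] = merged
    return tokens

feats = np.random.default_rng(0).normal(size=(16, 8))
sal = np.linspace(0, 1, 16)                 # last tokens most salient
print(merge_tokens(feats, sal, r=6).shape)  # (10, 8)
```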
null | https://openreview.net/forum?id=wdGvRud1LS | @inproceedings{
ma2024learning,
title={Learning Cortico-Muscular Dependence through Orthonormal Decomposition of Density Ratios},
author={Shihan Ma and Bo Hu and Tianyu Jia and Alexander Kenneth Clarke and Blanka Zicher and Arnault H. Caillet and Dario Farina and Jose C Principe},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wdGvRud1LS}
} | The cortico-spinal neural pathway is fundamental for motor control and movement execution, and in humans it is typically studied using concurrent electroencephalography (EEG) and electromyography (EMG) recordings. However, current approaches for capturing high-level and contextual connectivity between these recordings have important limitations. Here, we present a novel application of statistical dependence estimators based on orthonormal decomposition of density ratios to model the relationship between cortical and muscle oscillations. Our method extends from traditional scalar-valued measures by learning eigenvalues, eigenfunctions, and projection spaces of density ratios from realizations of the signal, addressing the interpretability, scalability, and local temporal dependence of cortico-muscular connectivity. We experimentally demonstrate that eigenfunctions learned from cortico-muscular connectivity can accurately classify movements and subjects. Moreover, they reveal channel and temporal dependencies that confirm the activation of specific EEG channels during movement. | Learning Cortico-Muscular Dependence through Orthonormal Decomposition of Density Ratios | [
"Shihan Ma",
"Bo Hu",
"Tianyu Jia",
"Alexander Kenneth Clarke",
"Blanka Zicher",
"Arnault H. Caillet",
"Dario Farina",
"Jose C Principe"
] | NeurIPS.cc/2024/Conference | 2410.14697 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=wcxHbAY8B3 | @inproceedings{
huang2024gaussianmarker,
title={GaussianMarker: Uncertainty-Aware Copyright Protection of 3D Gaussian Splatting},
author={Xiufeng Huang and Ruiqi Li and Yiu-ming Cheung and Ka Chun Cheung and Simon See and Renjie Wan},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wcxHbAY8B3}
} | 3D Gaussian Splatting (3DGS) has become a crucial method for acquiring 3D assets. To protect the copyright of these assets, digital watermarking techniques can be applied to embed ownership information discreetly within 3DGS models. However, existing watermarking methods for meshes, point clouds, and implicit radiance fields cannot be directly applied to 3DGS models, as 3DGS models use explicit 3D Gaussians with distinct structures and do not rely on neural networks. Naively embedding the watermark on a pre-trained 3DGS can cause obvious distortion in rendered images. In our work, we propose an uncertainty-based method that constrains the perturbation of model parameters to achieve invisible watermarking for 3DGS. At the message decoding stage, the copyright messages can be reliably extracted from both 3D Gaussians and 2D rendered images even under various forms of 3D and 2D distortions. We conduct extensive experiments on the Blender, LLFF, and MipNeRF-360 datasets to validate the effectiveness of our proposed method, demonstrating state-of-the-art performance on both message decoding accuracy and view synthesis quality. | GaussianMarker: Uncertainty-Aware Copyright Protection of 3D Gaussian Splatting | [
"Xiufeng Huang",
"Ruiqi Li",
"Yiu-ming Cheung",
"Ka Chun Cheung",
"Simon See",
"Renjie Wan"
] | NeurIPS.cc/2024/Conference | 2410.23718 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=wcX04Wn34u | @inproceedings{
lao2024lit,
title={LiT: Unifying Li{DAR} ''Languages'' with Li{DAR} Translator},
author={Yixing Lao and Tao Tang and Xiaoyang Wu and Peng Chen and Kaicheng Yu and Hengshuang Zhao},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wcX04Wn34u}
} | LiDAR data exhibits significant domain gaps due to variations in sensors, vehicles, and driving environments, creating “language barriers” that limit the effective use of data across domains and the scalability of LiDAR perception models. To address these challenges, we introduce the LiDAR Translator (LiT), a framework that directly translates LiDAR data across domains, enabling both cross-domain adaptation and multi-domain joint learning. LiT integrates three key components: a scene modeling module for precise foreground and background reconstruction, a LiDAR modeling module that models LiDAR rays statistically and simulates ray-drop, and a fast, hardware-accelerated ray casting engine. LiT enables state-of-the-art zero-shot and unified domain detection across diverse LiDAR datasets, marking a step toward data-driven domain unification for autonomous driving systems. Source code and demos are available at: https://yxlao.github.io/lit. | LiT: Unifying LiDAR "Languages" with LiDAR Translator | [
"Yixing Lao",
"Tao Tang",
"Xiaoyang Wu",
"Peng Chen",
"Kaicheng Yu",
"Hengshuang Zhao"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=wblxm5zdkE | @inproceedings{
huo2024realtime,
title={Real-Time Selection Under General Constraints via Predictive Inference},
author={Yuyang Huo and Lin Lu and Haojie Ren and Changliang Zou},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wblxm5zdkE}
} | Real-time decision-making has attracted growing attention in the big data era. Here, we consider the problem of sample selection in the online setting, where one encounters a possibly infinite sequence of individuals collected over time with covariate information available. The goal is to select samples of interest that are characterized by their unobserved responses until the user-specified stopping time. We derive a new decision rule that enables us to find more preferable samples that meet practical requirements by simultaneously controlling two types of general constraints: individual and interactive constraints, which include the widely utilized False Selection Rate (FSR), cost limitations, and diversity of selected samples. The key elements of our approach involve quantifying the uncertainty of response predictions via predictive inference and addressing individual and interactive constraints in a sequential manner. Theoretical and numerical results demonstrate the effectiveness of the proposed method in controlling both individual and interactive constraints. | Real-Time Selection Under General Constraints via Predictive Inference | [
"Yuyang Huo",
"Lin Lu",
"Haojie Ren",
"Changliang Zou"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=wbE0QCBWji | @inproceedings{
zhang2024constructing,
title={Constructing Semantics-Aware Adversarial Examples with Probabilistic Perspective},
author={Andi Zhang and Mingtian Zhang and Damon Wischik},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wbE0QCBWji}
} | We propose a probabilistic perspective on adversarial examples, allowing us to embed subjective understanding of semantics as a distribution into the process of generating adversarial examples, in a principled manner. Despite significant pixel-level modifications compared to traditional adversarial attacks, our method preserves the overall semantics of the image, making the changes difficult for humans to detect. This extensive pixel-level modification enhances our method's ability to deceive classifiers designed to defend against adversarial attacks. Our empirical findings indicate that the proposed methods achieve higher success rates in circumventing adversarial defense mechanisms, while remaining difficult for human observers to detect. | Constructing Semantics-Aware Adversarial Examples with Probabilistic Perspective | [
"Andi Zhang",
"Mingtian Zhang",
"Damon Wischik"
] | NeurIPS.cc/2024/Conference | 2306.00353 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=waQ5X4qc3W | @inproceedings{
zhu2024stabilize,
title={Stabilize the Latent Space for Image Autoregressive Modeling: A Unified Perspective},
author={Yongxin Zhu and Bocheng Li and Hang Zhang and Xin Li and Linli Xu and Lidong Bing},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=waQ5X4qc3W}
} | Latent-based image generative models, such as Latent Diffusion Models (LDMs) and Mask Image Models (MIMs), have achieved notable success in image generation tasks. These models typically leverage reconstructive autoencoders like VQGAN or VAE to encode pixels into a more compact latent space and learn the data distribution in the latent space instead of directly from pixels. However, this practice raises a pertinent question: Is it truly the optimal choice? In response, we begin with an intriguing observation: despite sharing the same latent space, autoregressive models significantly lag behind LDMs and MIMs in image generation. This finding contrasts sharply with the field of NLP, where the autoregressive model GPT has established a commanding presence. To address this discrepancy, we introduce a unified perspective on the relationship between latent space and generative models, emphasizing the stability of latent space in image generative modeling. Furthermore, we propose a simple but effective discrete image tokenizer to stabilize the latent space for image generative modeling by applying K-Means on the latent features of self-supervised learning models. Experimental results show that image autoregressive modeling with our tokenizer (DiGIT) benefits both image understanding and image generation with the next token prediction principle, which is inherently straightforward for GPT models but challenging for other generative models. Remarkably, for the first time, a GPT-style autoregressive model for images outperforms LDMs, which also exhibits substantial improvement akin to GPT when scaling up model size. Our findings underscore the potential of an optimized latent space and the integration of discrete tokenization in advancing the capabilities of image generative models. The code is available at \url{https://github.com/DAMO-NLP-SG/DiGIT}. | Stabilize the Latent Space for Image Autoregressive Modeling: A Unified Perspective | [
"Yongxin Zhu",
"Bocheng Li",
"Hang Zhang",
"Xin Li",
"Linli Xu",
"Lidong Bing"
] | NeurIPS.cc/2024/Conference | 2410.12490 | [
"https://github.com/DAMO-NLP-SG/DiGIT"
] | https://huggingface.co/papers/2410.12490 | 3 | 8 | 2 | 6 | [
"DAMO-NLP-SG/DiGIT"
] | [] | [] | [
"DAMO-NLP-SG/DiGIT"
] | [] | [] | 1 | poster |
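The stabilizing tokenizer the DiGIT abstract describes reduces to a very compact recipe: run K-Means over features from a frozen self-supervised encoder and use cluster indices as discrete tokens. The sketch below uses random features as a stand-in for an SSL model's patch features; the codebook size and all names are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-in for patch features from a frozen self-supervised encoder; in the
# paper's setting these would come from an SSL model, not random noise.
rng = np.random.default_rng(0)
features = rng.normal(size=(10_000, 64))   # (num_patches, feature_dim)

# Build the discrete tokenizer: a K-Means codebook over the SSL features.
codebook = KMeans(n_clusters=512, n_init=10, random_state=0).fit(features)

def tokenize(patch_feats):
    """Map patch features to discrete token ids (nearest centroid)."""
    return codebook.predict(patch_feats)

tokens = tokenize(features[:256])          # token sequence for one image
print(tokens.shape, tokens.min(), tokens.max())  # (256,), ids in [0, 512)
```

An autoregressive model can then be trained on these token ids with plain next-token prediction, which is the setup the abstract argues benefits from the more stable K-Means latent space.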
null | https://openreview.net/forum?id=wZigMVFURk | @inproceedings{
wu2024ropinn,
title={Ro{PINN}: Region Optimized Physics-Informed Neural Networks},
author={Haixu Wu and Huakun Luo and Yuezhou Ma and Jianmin Wang and Mingsheng Long},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wZigMVFURk}
} | Physics-informed neural networks (PINNs) have been widely applied to solve partial differential equations (PDEs) by enforcing outputs and gradients of deep models to satisfy target equations. Due to the limitation of numerical computation, PINNs are conventionally optimized on finite selected points. However, since PDEs are usually defined on continuous domains, solely optimizing models on scattered points may be insufficient to obtain an accurate solution for the whole domain. To mitigate this inherent deficiency of the default scatter-point optimization, this paper proposes and theoretically studies a new training paradigm as region optimization. Concretely, we propose to extend the optimization process of PINNs from isolated points to their continuous neighborhood regions, which can theoretically decrease the generalization error, especially for hidden high-order constraints of PDEs. A practical training algorithm, Region Optimized PINN (RoPINN), is seamlessly derived from this new paradigm, which is implemented by a straightforward but effective Monte Carlo sampling method. By calibrating the sampling process into trust regions, RoPINN finely balances optimization and generalization error. Experimentally, RoPINN consistently boosts the performance of diverse PINNs on a wide range of PDEs without extra backpropagation or gradient calculation. Code is available at this repository: https://github.com/thuml/RoPINN. | RoPINN: Region Optimized Physics-Informed Neural Networks | [
"Haixu Wu",
"Huakun Luo",
"Yuezhou Ma",
"Jianmin Wang",
"Mingsheng Long"
] | NeurIPS.cc/2024/Conference | 2405.14369 | [
"https://github.com/thuml/ropinn"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
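The core move of RoPINN's region optimization, replacing the PDE residual at isolated collocation points with a Monte Carlo average over small neighborhood regions, fits in a short PyTorch sketch. This is a minimal illustration on a toy 1-D Poisson equation, with boundary-condition terms and the paper's trust-region calibration omitted; the radius and sample counts are arbitrary assumptions.

```python
import torch

def pde_residual(model, x):
    """Residual of a toy 1-D Poisson equation u''(x) = sin(x)."""
    x = x.requires_grad_(True)
    u = model(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    return d2u - torch.sin(x)

def region_loss(model, points, radius=0.05, n_samples=4):
    """Average the squared residual over sampled neighborhoods of the
    collocation points (region optimization), not the points alone."""
    losses = []
    for _ in range(n_samples):
        perturbed = points + radius * (2 * torch.rand_like(points) - 1)
        losses.append(pde_residual(model, perturbed).pow(2).mean())
    return torch.stack(losses).mean()

model = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1)
)
pts = torch.rand(128, 1) * 2 * torch.pi     # collocation points on [0, 2*pi]
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(100):
    opt.zero_grad()
    loss = region_loss(model, pts)          # boundary loss omitted for brevity
    loss.backward()
    opt.step()
print(f"final region residual loss: {loss.item():.4f}")
```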
null | https://openreview.net/forum?id=wZgw4CrxwK | @inproceedings{
saig2024incentivizing,
title={Incentivizing Quality Text Generation via Statistical Contracts},
author={Eden Saig and Ohad Einav and Inbal Talgam-Cohen},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wZgw4CrxwK}
} | While the success of large language models (LLMs) increases demand for machine-generated text, current pay-per-token pricing schemes create a misalignment of incentives known in economics as moral hazard: Text-generating agents have strong incentive to cut costs by preferring a cheaper model over the cutting-edge one, and this can be done “behind the scenes” since the agent performs inference internally. In this work, we approach this issue from an economic perspective, by proposing a pay-for-performance, contract-based framework for incentivizing quality. We study a principal-agent game where the agent generates text using costly inference, and the contract determines the principal’s payment for the text according to an automated quality evaluation. Since standard contract theory is inapplicable when internal inference costs are unknown, we introduce cost-robust contracts. As our main theoretical contribution, we characterize optimal cost-robust contracts through a direct correspondence to optimal composite hypothesis tests from statistics, generalizing a result of Saig et al. (NeurIPS’23). We evaluate our framework empirically by deriving contracts for a range of objectives and LLM evaluation benchmarks, and find that cost-robust contracts sacrifice only a marginal increase in objective value compared to their cost-aware counterparts. | Incentivizing Quality Text Generation via Statistical Contracts | [
"Eden Saig",
"Ohad Einav",
"Inbal Talgam-Cohen"
] | NeurIPS.cc/2024/Conference | 2406.11118 | [
"https://github.com/edensaig/llm-contracts"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=wWyumwEYV8 | @inproceedings{
wang2024a,
title={A Sober Look at the Robustness of {CLIP}s to Spurious Features},
author={Qizhou Wang and Yong Lin and Yongqiang Chen and Ludwig Schmidt and Bo Han and Tong Zhang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wWyumwEYV8}
} | Large vision language models, such as CLIP, demonstrate greater robustness to spurious features than single-modal models trained on ImageNet. However, existing test datasets are typically curated based on ImageNet-trained models, which aim to capture the spurious features inherent in ImageNet. Benchmarking CLIP models based on the ImageNet-oriented spurious features may not be sufficient to reflect the extent to which CLIP models are robust to spurious correlations within CLIP training data, e.g., LAION. To this end, we craft a new challenging dataset named CounterAnimal designed to reveal the reliance of CLIP models on realistic spurious features. Specifically, we split animal photos into groups according to the backgrounds, and then identify a pair of groups for each class where a CLIP model shows large performance drops across the two groups. Our evaluations show that the spurious features captured by CounterAnimal are generically learned by CLIP models with different backbones and pre-train data, yet have limited influence on ImageNet models. We provide theoretical insights that the CLIP objective cannot offer additional robustness. Furthermore, we also re-evaluate strategies such as scaling up parameters and high-quality pre-trained data. We find that they still help mitigate the spurious features, providing a promising path for future developments. | A Sober Look at the Robustness of CLIPs to Spurious Features | [
"Qizhou Wang",
"Yong Lin",
"Yongqiang Chen",
"Ludwig Schmidt",
"Bo Han",
"Tong Zhang"
] | NeurIPS.cc/2024/Conference | 2403.11497 | [
""
] | https://huggingface.co/papers/2403.11497 | 0 | 0 | 0 | 6 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=wWiAR5mqXq | @inproceedings{
bo2024reflective,
title={Reflective Multi-Agent Collaboration based on Large Language Models},
author={Xiaohe Bo and Zeyu Zhang and Quanyu Dai and Xueyang Feng and Lei Wang and Rui Li and Xu Chen and Ji-Rong Wen},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wWiAR5mqXq}
} | Benefiting from the powerful language expression and planning capabilities of Large Language Models (LLMs), LLM-based autonomous agents have achieved promising performance in various downstream tasks. Recently, based on the development of single-agent systems, researchers propose to construct LLM-based multi-agent systems to tackle more complicated tasks. In this paper, we propose a novel framework, named COPPER, to enhance the collaborative capabilities of LLM-based agents with the self-reflection mechanism. To improve the quality of reflections, we propose to fine-tune a shared reflector, which automatically tunes the prompts of actor models using our counterfactual PPO mechanism. On the one hand, we propose counterfactual rewards to assess the contribution of a single agent’s reflection within the system, alleviating the credit assignment problem. On the other hand, we propose to train a shared reflector, which enables the reflector to generate personalized reflections according to agent roles, while reducing the computational resource requirements and improving training stability. We conduct experiments on three datasets to evaluate the performance of our model in multi-hop question answering, mathematics, and chess scenarios. Experimental results show that COPPER possesses stronger reflection capabilities and exhibits excellent generalization performance across different actor models. | Reflective Multi-Agent Collaboration based on Large Language Models | [
"Xiaohe Bo",
"Zeyu Zhang",
"Quanyu Dai",
"Xueyang Feng",
"Lei Wang",
"Rui Li",
"Xu Chen",
"Ji-Rong Wen"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=wWguwYhpAY | @inproceedings{
ben-shabat2024neural,
title={Neural Experts: Mixture of Experts for Implicit Neural Representations},
author={Yizhak Ben-Shabat and Chamin P Hewa Koneputugodage and Sameera Ramasinghe and Stephen Gould},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wWguwYhpAY}
} | Implicit neural representations (INRs) have proven effective in various tasks including image, shape, audio, and video reconstruction. These INRs typically learn the implicit field from sampled input points. This is often done using a single network for the entire domain, imposing many global constraints on a single function.
In this paper, we propose a mixture of experts (MoE) implicit neural representation approach that learns local piecewise-continuous functions, simultaneously subdividing the domain and fitting it locally.
We show that incorporating a mixture-of-experts architecture into existing INR formulations improves speed and accuracy while reducing memory requirements. Additionally, we introduce novel conditioning and pretraining methods for the gating network that improve convergence to the desired solution.
We evaluate the effectiveness of our approach on multiple reconstruction tasks, including surface reconstruction, image reconstruction, and audio signal reconstruction and show improved performance compared to non-MoE methods. Code is available at our project page https://sitzikbs.github.io/neural-experts-projectpage/ . | Neural Experts: Mixture of Experts for Implicit Neural Representations | [
"Yizhak Ben-Shabat",
"Chamin P Hewa Koneputugodage",
"Sameera Ramasinghe",
"Stephen Gould"
] | NeurIPS.cc/2024/Conference | 2410.21643 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
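The architecture the Neural Experts abstract describes, a gating network that softly subdivides the domain across expert MLPs, is compact enough to sketch directly. This is a minimal PyTorch illustration of an MoE implicit neural representation; the paper's conditioning and pretraining methods for the gate are omitted, and all sizes are arbitrary.

```python
import torch
import torch.nn as nn

class MoEINR(nn.Module):
    """Implicit neural representation with a mixture of expert MLPs.

    Each input coordinate is routed by a gating network; the output is the
    gate-weighted sum of expert predictions, letting experts specialize on
    sub-regions of the domain (a sketch of the MoE-INR idea).
    """

    def __init__(self, in_dim=2, out_dim=3, n_experts=4, hidden=64):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(in_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, out_dim),
            )
            for _ in range(n_experts)
        )
        self.gate = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_experts)
        )

    def forward(self, coords):                        # coords: (N, in_dim)
        w = torch.softmax(self.gate(coords), dim=-1)  # soft domain subdivision
        preds = torch.stack([e(coords) for e in self.experts], dim=-1)
        return (preds * w.unsqueeze(1)).sum(-1)       # (N, out_dim)

# Fit RGB values of a toy "image" field at 2-D coordinates.
model = MoEINR()
xy = torch.rand(1024, 2)
rgb = torch.rand(1024, 3)                             # target signal
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = ((model(xy) - rgb) ** 2).mean()
    loss.backward()
    opt.step()
print(loss.item())
```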
null | https://openreview.net/forum?id=wTIzpqX121 | @inproceedings{
oskarsson2024probabilistic,
title={Probabilistic Weather Forecasting with Hierarchical Graph Neural Networks},
author={Joel Oskarsson and Tomas Landelius and Marc Peter Deisenroth and Fredrik Lindsten},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wTIzpqX121}
} | In recent years, machine learning has established itself as a powerful tool for high-resolution weather forecasting. While most current machine learning models focus on deterministic forecasts, accurately capturing the uncertainty in the chaotic weather system calls for probabilistic modeling. We propose a probabilistic weather forecasting model called Graph-EFM, combining a flexible latent-variable formulation with the successful graph-based forecasting framework. The use of a hierarchical graph construction allows for efficient sampling of spatially coherent forecasts. Requiring only a single forward pass per time step, Graph-EFM allows for fast generation of arbitrarily large ensembles. We experiment with the model on both global and limited area forecasting. Ensemble forecasts from Graph-EFM achieve equivalent or lower errors than comparable deterministic models, with the added benefit of accurately capturing forecast uncertainty. | Probabilistic Weather Forecasting with Hierarchical Graph Neural Networks | [
"Joel Oskarsson",
"Tomas Landelius",
"Marc Peter Deisenroth",
"Fredrik Lindsten"
] | NeurIPS.cc/2024/Conference | 2406.04759 | [
"https://github.com/mllam/neural-lam"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=wT6GHk5ShC | @inproceedings{
yao2024enhancing,
title={Enhancing In-Context Learning Performance with just {SVD}-Based Weight Pruning: A Theoretical Perspective},
author={Xinhao Yao and Xiaolin Hu and Shenzhi Yang and Yong Liu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wT6GHk5ShC}
} | Pre-trained large language models (LLMs) based on the Transformer have demonstrated striking in-context learning (ICL) abilities. With a few demonstration input-label pairs, they can predict the label for an unseen input without any parameter updates. In this paper, we show an exciting phenomenon: SVD-based weight pruning can enhance ICL performance, and, more surprisingly, pruning weights in deep layers often results in more stable performance improvements than in shallow layers. However, the underlying mechanism of these findings remains an open question. To explain them, we conduct an in-depth theoretical analysis by presenting the implicit gradient descent (GD) trajectories of ICL and giving mutual-information-based generalization bounds of ICL via full implicit GD trajectories. Building on all our experimental and theoretical insights, we then propose a simple, derivative-free model-compression algorithm for enhancing ICL inference on downstream tasks. Experiments on benchmark datasets and open-source LLMs demonstrate the method's effectiveness. | Enhancing In-Context Learning Performance with just SVD-Based Weight Pruning: A Theoretical Perspective | [
"Xinhao Yao",
"Xiaolin Hu",
"Shenzhi Yang",
"Yong Liu"
] | NeurIPS.cc/2024/Conference | 2406.03768 | [
"https://github.com/chen123ctrls/enhancingicl_svdpruning"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
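The intervention studied in the abstract above is the classical truncated SVD: replace a weight matrix by its best rank-r approximation, pruning the smallest singular directions. Below is a minimal numpy sketch of that operation; the matrix here is random, whereas in the paper's setting it would be a (deep-layer) weight of a pre-trained LLM, replaced before running ICL evaluation.

```python
import numpy as np

def svd_prune(W, keep_ratio=0.5):
    """Return the best rank-r approximation of W, keeping the largest
    singular values (Eckart-Young); the rest of the spectrum is pruned."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    r = max(1, int(len(s) * keep_ratio))
    return (U[:, :r] * s[:r]) @ Vt[:r]

rng = np.random.default_rng(0)
W = rng.normal(size=(768, 768))            # e.g., an attention projection
W_pruned = svd_prune(W, keep_ratio=0.25)
err = np.linalg.norm(W - W_pruned) / np.linalg.norm(W)
print(f"rank kept: {768 // 4}, relative Frobenius error: {err:.3f}")
```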
null | https://openreview.net/forum?id=wT5AgMVkaJ | @inproceedings{
zhang2024aligning,
title={Aligning Vision Models with Human Aesthetics in Retrieval: Benchmarks and Algorithms},
author={Miaosen Zhang and Yixuan Wei and Zhen Xing and Yifei Ma and Zuxuan Wu and Ji Li and Zheng Zhang and Qi Dai and Chong Luo and Xin Geng and Baining Guo},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wT5AgMVkaJ}
} | Modern vision models are trained on very large noisy datasets. While these models acquire strong capabilities, they may not follow the user's intent to output the desired results in certain aspects, e.g., visual aesthetic, preferred style, and responsibility. In this paper, we target the realm of visual aesthetics and aim to align vision models with human aesthetic standards in a retrieval system. Advanced retrieval systems usually adopt a cascade of aesthetic models as re-rankers or filters, which are limited to low-level features like saturation and perform poorly when stylistic, cultural or knowledge contexts are involved. We find that utilizing the reasoning ability of large language models (LLMs) to rephrase the search query and extend the aesthetic expectations can make up for this shortcoming. Based on the above findings, we propose a preference-based reinforcement learning method that fine-tunes the vision models to distill the knowledge from both LLM reasoning and the aesthetic models to better align the vision models with human aesthetics. Meanwhile, since benchmarks designed for evaluating retrieval systems are rare, we leverage the strong abilities of large multi-modality models (LMMs) to evaluate aesthetic performance. As aesthetic assessment is one of the most subjective tasks, to validate the robustness of LMMs, we further propose a novel dataset named HPIR to benchmark the alignment with human aesthetics. Experiments demonstrate that our method significantly enhances the aesthetic behaviors of the vision models under several metrics. We believe the proposed algorithm can be a general practice for aligning vision models with human values. | Aligning Vision Models with Human Aesthetics in Retrieval: Benchmarks and Algorithms | [
"Miaosen Zhang",
"Yixuan Wei",
"Zhen Xing",
"Yifei Ma",
"Zuxuan Wu",
"Ji Li",
"Zheng Zhang",
"Qi Dai",
"Chong Luo",
"Xin Geng",
"Baining Guo"
] | NeurIPS.cc/2024/Conference | 2406.09397 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=wT2TIfHKp8 | @inproceedings{
xu2024taming,
title={Taming the Long Tail in Human Mobility Prediction},
author={Xiaohang Xu and Renhe Jiang and Chuang Yang and zipei fan and Kaoru Sezaki},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wT2TIfHKp8}
} | With the popularity of location-based services, human mobility prediction plays a key role in enhancing personalized navigation, optimizing recommendation systems, and facilitating urban mobility and planning. This involves predicting a user's next POI (point-of-interest) visit using their past visit history. However, the uneven distribution of visitations over time and space, namely the long-tail problem in spatial distribution, makes it difficult for AI models to predict those POIs that are less visited by humans. In light of this issue, we propose the $\underline{\bf{Lo}}$ng-$\underline{\bf{T}}$ail Adjusted $\underline{\bf{Next}}$ POI Prediction (LoTNext) framework for mobility prediction, combining a Long-Tailed Graph Adjustment module to reduce the impact of the long-tailed nodes in the user-POI interaction graph and a novel Long-Tailed Loss Adjustment module to adjust loss by logit score and sample weight adjustment strategy. Also, we employ the auxiliary prediction task to enhance generalization and accuracy. Our experiments with two real-world trajectory datasets demonstrate that LoTNext significantly surpasses existing state-of-the-art works. Our code is available at https://github.com/Yukayo/LoTNext. | Taming the Long Tail in Human Mobility Prediction | [
"Xiaohang Xu",
"Renhe Jiang",
"Chuang Yang",
"zipei fan",
"Kaoru Sezaki"
] | NeurIPS.cc/2024/Conference | 2410.14970 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=wT2KhEb97a | @inproceedings{
zhou2024iterative,
title={Iterative Methods via Locally Evolving Set Process},
author={Baojian Zhou and Yifan Sun and Reza Babanezhad Harikandeh and Xingzhi Guo and Deqing Yang and Yanghua Xiao},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wT2KhEb97a}
} | Given the damping factor $\alpha$ and precision tolerance $\epsilon$, \citet{andersen2006local} introduced Approximate Personalized PageRank (APPR), the \textit{de facto local method} for approximating the PPR vector, with runtime bounded by $\Theta(1/(\alpha\epsilon))$ independent of the graph size. Recently, Fountoulakis \& Yang asked whether faster local algorithms could be developed using $\tilde{\mathcal{O}}(1/(\sqrt{\alpha}\epsilon))$ operations. By noticing that APPR is a local variant of Gauss-Seidel, this paper explores the question of *whether standard iterative solvers can be effectively localized*. We propose to use the *locally evolving set process*, a novel framework to characterize the algorithm locality, and demonstrate that many standard solvers can be effectively localized. Let $\overline{\operatorname{vol}}{ (\mathcal S_t)}$ and $\overline{\gamma_t}$ be the running average of volume and the residual ratio of active nodes $\textstyle \mathcal{S_t}$ during the process. We show $\overline{\operatorname{vol}}{ (\mathcal S_t)}/\overline{\gamma_t} \leq 1/\epsilon$ and prove APPR admits a new runtime bound $\tilde{\mathcal{O}}(\overline{\operatorname{vol}}(\mathcal S_t)/(\alpha\overline{\gamma_t}))$ mirroring the actual performance. Furthermore, when the geometric mean of residual reduction is $\Theta(\sqrt{\alpha})$, then there exists $c \in (0,2)$ such that the local Chebyshev method has runtime $\tilde{\mathcal{O}}(\overline{\operatorname{vol}}(\mathcal{S_t})/(\sqrt{\alpha}(2-c)))$ without the monotonicity assumption. Numerical results confirm the efficiency of this novel framework and show up to a hundredfold speedup over corresponding standard solvers on real-world graphs. | Iterative Methods via Locally Evolving Set Process | [
"Baojian Zhou",
"Yifan Sun",
"Reza Babanezhad Harikandeh",
"Xingzhi Guo",
"Deqing Yang",
"Yanghua Xiao"
] | NeurIPS.cc/2024/Conference | 2410.15020 | [
"https://github.com/baojian/LocalCH"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=wSqpNeMVLU | @inproceedings{
yin2024a,
title={A Theoretical Perspective for Speculative Decoding Algorithm},
author={Ming Yin and Minshuo Chen and Kaixuan Huang and Mengdi Wang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wSqpNeMVLU}
} | Transformer-based autoregressive sampling has been the major bottleneck slowing down large language model inference. One effective way to accelerate inference is Speculative Decoding, which employs a small model to sample a sequence of draft tokens and a large model to validate them. Despite its empirical effectiveness, the theoretical understanding of Speculative Decoding lags behind. This paper tackles this gap by conceptualizing the decoding problem via a Markov chain abstraction and studying its key properties, output quality and inference acceleration, from a theoretical perspective. Our analysis covers the theoretical limits of speculative decoding, batch algorithms, and output quality-inference acceleration tradeoffs. Our results reveal the fundamental connections between different components of LLMs via total variation distances and show how they jointly affect the efficiency of decoding algorithms. | A Theoretical Perspective for Speculative Decoding Algorithm | [
"Ming Yin",
"Minshuo Chen",
"Kaixuan Huang",
"Mengdi Wang"
] | NeurIPS.cc/2024/Conference | 2411.00841 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=wSpIdUXZYX | @inproceedings{
rahman2024pretraining,
title={Pretraining Codomain Attention Neural Operators for Solving Multiphysics {PDE}s},
author={Md Ashiqur Rahman and Robert Joseph George and Mogab Elleithy and Daniel Leibovici and Zongyi Li and Boris Bonev and Colin White and Julius Berner and Raymond A. Yeh and Jean Kossaifi and Kamyar Azizzadenesheli and Anima Anandkumar},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wSpIdUXZYX}
} | Existing neural operator architectures face challenges when solving multiphysics problems with coupled partial differential equations (PDEs) due to complex geometries, interactions between physical variables, and the limited amounts of high-resolution training data.
To address these issues, we propose *Codomain Attention Neural Operator* (CoDA-NO), which tokenizes functions along the codomain or channel space, enabling self-supervised learning or pretraining of multiple PDE systems.
Specifically, we extend positional encoding, self-attention, and normalization layers to function spaces. CoDA-NO can learn representations of different PDE systems with a single model. We evaluate CoDA-NO's potential as a backbone for learning multiphysics PDEs over multiple systems by considering few-shot learning settings. On complex downstream tasks with limited data, such as fluid flow simulations, fluid-structure interactions, and Rayleigh-Bénard convection, we found CoDA-NO to outperform existing methods by over 36%. | Pretraining Codomain Attention Neural Operators for Solving Multiphysics PDEs | [
"Md Ashiqur Rahman",
"Robert Joseph George",
"Mogab Elleithy",
"Daniel Leibovici",
"Zongyi Li",
"Boris Bonev",
"Colin White",
"Julius Berner",
"Raymond A. Yeh",
"Jean Kossaifi",
"Kamyar Azizzadenesheli",
"Anima Anandkumar"
] | NeurIPS.cc/2024/Conference | 2403.12553 | [
"https://github.com/ashiq24/coda-no"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=wQpNG9JnPK | @inproceedings{
chen2024neural,
title={Neural Collapse Inspired Feature Alignment for Out-of-Distribution Generalization},
author={Zhikang Chen and Min Zhang and Sen Cui and Haoxuan Li and Gang Niu and Mingming Gong and Changshui Zhang and Kun Zhang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wQpNG9JnPK}
} | The spurious correlation between the background features of an image and its label arises because samples labeled with the same class in the training set often co-occur with a specific background, which causes the encoder to extract non-semantic features for classification and results in poor out-of-distribution generalization performance. Although many studies have been proposed to address this challenge, semantic and spurious features remain difficult to decouple accurately from the original image, and deep learning models still fail to achieve high performance. This paper proposes a novel perspective inspired by neural collapse to solve the spurious correlation problem through the alternate execution of environment partitioning and learning semantic masks. Specifically, we propose to assign an environment to each sample by learning a local model for each environment and using maximum likelihood probability. At the same time, we require that the learned semantic mask neurally collapses to the same simplex equiangular tight frame (ETF) in each environment after being applied to the original input. We conduct extensive experiments on four datasets, and the results demonstrate that our method significantly improves out-of-distribution performance. | Neural Collapse Inspired Feature Alignment for Out-of-Distribution Generalization | [
"Zhikang Chen",
"Min Zhang",
"Sen Cui",
"Haoxuan Li",
"Gang Niu",
"Mingming Gong",
"Changshui Zhang",
"Kun Zhang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=wQiJNyPENt | @inproceedings{
teufel2024batched,
title={Batched Energy-Entropy acquisition for Bayesian Optimization},
author={Felix Teufel and Carsten Stahlhut and Jesper Ferkinghoff-Borg},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wQiJNyPENt}
} | Bayesian optimization (BO) is an attractive machine learning framework for performing sample-efficient global optimization of black-box functions. The optimization process is guided by an acquisition function that selects points to acquire in each round of BO. In batched BO, when multiple points are acquired in parallel, commonly used acquisition functions are often high-dimensional and intractable, leading to the use of sampling-based alternatives. We propose a statistical physics inspired acquisition function that can natively handle batches. Batched Energy-Entropy acquisition for BO (BEEBO) enables tight control of the explore-exploit trade-off of the optimization process and generalizes to heteroskedastic black-box problems. We demonstrate the applicability of BEEBO on a range of problems, showing competitive performance to existing acquisition functions. | Batched Energy-Entropy acquisition for Bayesian Optimization | [
"Felix Teufel",
"Carsten Stahlhut",
"Jesper Ferkinghoff-Borg"
] | NeurIPS.cc/2024/Conference | 2410.08804 | [
"https://github.com/novonordisk-research/BEE-BO"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=wN5AgP0DJ0 | @inproceedings{
knigge2024spacetime,
title={Space-Time Continuous {PDE} Forecasting using Equivariant Neural Fields},
author={David M Knigge and David Wessels and Riccardo Valperga and Samuele Papa and Jan-Jakob Sonke and Erik J Bekkers and Stratis Gavves},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wN5AgP0DJ0}
} | Recently, Conditional Neural Fields (NeFs) have emerged as a powerful modelling paradigm for PDEs, by learning solutions as flows in the latent space of the Conditional NeF. Although benefiting from favourable properties of NeFs such as grid-agnosticity and space-time-continuous dynamics modelling, this approach limits the ability to impose known constraints of the PDE on the solutions -- such as symmetries or boundary conditions -- in favour of modelling flexibility. Instead, we propose a space-time continuous NeF-based solving framework that -- by preserving geometric information in the latent space of the Conditional NeF -- preserves known symmetries of the PDE. We show that modelling solutions as flows of pointclouds over the group of interest $G$ improves generalization and data-efficiency. Furthermore, we validate that our framework readily generalizes to unseen spatial and temporal locations, as well as to geometric transformations of the initial conditions -- where other NeF-based PDE forecasting methods fail -- and improves over baselines in a number of challenging geometries. | Space-Time Continuous PDE Forecasting using Equivariant Neural Fields | [
"David M Knigge",
"David Wessels",
"Riccardo Valperga",
"Samuele Papa",
"Jan-Jakob Sonke",
"Erik J Bekkers",
"Stratis Gavves"
] | NeurIPS.cc/2024/Conference | 2406.06660 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=wK0Z49myyi | @inproceedings{
lin2024craym,
title={{CRAYM}: Neural Field Optimization via Camera {RAY} Matching},
author={Liqiang Lin and Wenpeng Wu and Chi-Wing Fu and Hao Zhang and Hui Huang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wK0Z49myyi}
} | We introduce camera ray matching (CRAYM) into the joint optimization of camera poses and neural fields from multi-view images. The optimized field, referred to as a feature volume, can be “probed” by the camera rays for novel view synthesis (NVS) and 3D geometry reconstruction. One key reason for matching camera rays, instead of pixels as in prior works, is that the camera rays can be parameterized by the feature volume to carry both geometric and photometric information. Multi-view consistencies involving the camera rays and scene rendering can be naturally integrated into the joint optimization and network training, to impose physically meaningful constraints to improve the final quality of both the geometric reconstruction and photorealistic rendering. We formulate our per-ray optimization and matched ray coherence by focusing on camera rays passing through keypoints in the input images to elevate both the efficiency and accuracy of scene correspondences. Accumulated ray features along the feature volume provide a means to discount the coherence constraint amid erroneous ray matching. We demonstrate the effectiveness of CRAYM for both NVS and geometry reconstruction, over dense- or sparse-view settings, with qualitative and quantitative comparisons to state-of-the-art alternatives. | CRAYM: Neural Field Optimization via Camera RAY Matching | [
"Liqiang Lin",
"Wenpeng Wu",
"Chi-Wing Fu",
"Hao Zhang",
"Hui Huang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=wJaCsnT9UE | @inproceedings{
lu2024sharpnessdiversity,
title={Sharpness-diversity tradeoff: improving flat ensembles with SharpBalance},
author={Haiquan Lu and Xiaotian Liu and Yefan Zhou and Qunli Li and Kurt Keutzer and Michael W. Mahoney and Yujun Yan and Huanrui Yang and Yaoqing Yang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wJaCsnT9UE}
} | Recent studies on deep ensembles have identified the sharpness of the local minima of individual learners and the diversity of the ensemble members as key factors in improving test-time performance. Building on this, our study investigates the interplay between sharpness and diversity within deep ensembles, illustrating their crucial role in robust generalization to both in-distribution (ID) and out-of-distribution (OOD) data. We discover a trade-off between sharpness and diversity: minimizing the sharpness in the loss landscape tends to diminish the diversity of individual members within the ensemble, adversely affecting the ensemble's improvement. The trade-off is justified through our rigorous theoretical analysis and verified empirically through extensive experiments. To address the issue of reduced diversity, we introduce SharpBalance, a novel training approach that balances sharpness and diversity within ensembles. Theoretically, we show that our training strategy achieves a better sharpness-diversity trade-off. Empirically, we conducted comprehensive evaluations in various data sets (CIFAR-10, CIFAR-100, TinyImageNet) and showed that SharpBalance not only effectively improves the sharpness-diversity trade-off but also significantly improves ensemble performance in ID and OOD scenarios. | Sharpness-diversity tradeoff: improving flat ensembles with SharpBalance | [
"Haiquan Lu",
"Xiaotian Liu",
"Yefan Zhou",
"Qunli Li",
"Kurt Keutzer",
"Michael W. Mahoney",
"Yujun Yan",
"Huanrui Yang",
"Yaoqing Yang"
] | NeurIPS.cc/2024/Conference | 2407.12996 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=wJAF8TGVUG | @inproceedings{
zhou2024smolsearch,
title={S-MolSearch: 3D Semi-supervised Contrastive Learning for Bioactive Molecule Search},
author={Gengmo Zhou and Zhen Wang and Feng Yu and Guolin Ke and Zhewei Wei and Zhifeng Gao},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wJAF8TGVUG}
} | Virtual Screening is an essential technique in the early phases of drug discovery, aimed at identifying promising drug candidates from vast molecular libraries.
Recently, ligand-based virtual screening has garnered significant attention due to its efficacy in conducting extensive database screenings without relying on specific protein-binding site information.
Obtaining binding affinity data for complexes is highly expensive, resulting in a limited amount of available data that covers a relatively small chemical space. Moreover, these datasets contain a significant amount of inconsistent noise. It is challenging to identify an inductive bias that consistently maintains the integrity of molecular activity during data augmentation. To tackle these challenges, we propose S-MolSearch, the first framework, to our knowledge, to leverage molecular 3D information and affinity information in semi-supervised contrastive learning for ligand-based virtual screening.
Drawing on the principles of inverse optimal transport, S-MolSearch efficiently processes both labeled and unlabeled data, training molecular structural encoders while generating soft labels for the unlabeled data.
This design allows S-MolSearch to adaptively utilize unlabeled data within the learning process.
Empirically, S-MolSearch demonstrates superior performance on the widely used benchmarks LIT-PCBA and DUD-E. It surpasses both structure-based and ligand-based virtual screening methods in terms of AUROC, BEDROC and EF. | S-MolSearch: 3D Semi-supervised Contrastive Learning for Bioactive Molecule Search | [
"Gengmo Zhou",
"Zhen Wang",
"Feng Yu",
"Guolin Ke",
"Zhewei Wei",
"Zhifeng Gao"
] | NeurIPS.cc/2024/Conference | 2409.07462 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=wIE991zhXH | @inproceedings{
p{\'a}sztor2024bandits,
title={Bandits with Preference Feedback: A Stackelberg Game Perspective},
author={Barna P{\'a}sztor and Parnian Kassraie and Andreas Krause},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wIE991zhXH}
} | Bandits with preference feedback present a powerful tool for optimizing unknown target functions when only pairwise comparisons are allowed instead of direct value queries. This model allows for incorporating human feedback into online inference and optimization and has been employed in systems for tuning large language models.
The problem is fairly well understood in toy settings with linear target functions or over small finite domains, which limits its practical interest.
Taking the next step, we consider infinite domains and kernelized rewards. In this setting, selecting a pair of actions is quite challenging and requires balancing exploration and exploitation at two levels: within the pair, and along the iterations of the algorithm.
We propose MaxMinLCB, which emulates this trade-off as a zero-sum Stackelberg game and chooses action pairs that are informative and have favorable reward values. MaxMinLCB consistently outperforms algorithms in the literature and satisfies an anytime-valid rate-optimal regret guarantee. This is owed to our novel preference-based confidence sequences for kernelized logistic estimators, which are of independent interest. | Bandits with Preference Feedback: A Stackelberg Game Perspective | [
"Barna Pásztor",
"Parnian Kassraie",
"Andreas Krause"
] | NeurIPS.cc/2024/Conference | 2406.16745 | [
"https://github.com/lasgroup/maxminlcb"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=wHFaAH3E8z | @inproceedings{
tan2024fasme,
title={FasMe: Fast and Sample-efficient Meta Estimator for Precision Matrix Learning in Small Sample Settings},
author={Xiao Tan and Yiqin Wang and Yangyang Shen and Dian Shen and Meng Wang and Peibo Duan and Beilun Wang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wHFaAH3E8z}
} | Precision matrix estimation is a ubiquitous task featuring numerous applications such as rare disease diagnosis and neural connectivity exploration. However, this task becomes challenging in small sample settings, where the number of samples is significantly less than the number of dimensions, leading to unreliable estimates. Previous approaches either fail to perform well in small sample settings or suffer from inefficient estimation processes, even when incorporating meta-learning techniques.
To this end, we propose a novel approach, FasMe, for Fast and Sample-efficient Meta Precision Matrix Learning, which first extracts meta-knowledge through a multi-task learning paradigm. Then, meta-knowledge constraints are applied using a maximum determinant matrix completion algorithm for the novel task. As a result, we reduce the sample size requirements to $O(\log p/K)$ per meta-training task and $O(\log\vert \mathcal{G}\vert)$ for the meta-testing task. Moreover, the proposed model needs only $O(p \log\epsilon^{-1})$ time and $O(p)$ memory to converge to an $\epsilon$-accurate solution. On multiple synthetic and biomedical datasets, FasMe is at least ten times faster than the four baselines while improving prediction accuracy in small sample settings. | FasMe: Fast and Sample-efficient Meta Estimator for Precision Matrix Learning in Small Sample Settings | [
"Xiao Tan",
"Yiqin Wang",
"Yangyang Shen",
"Dian Shen",
"Meng Wang",
"Peibo Duan",
"Beilun Wang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=wGjSbaMsop | @inproceedings{
baumann2024algorithmic,
title={Algorithmic Collective Action in Recommender Systems: Promoting Songs by Reordering Playlists},
author={Joachim Baumann and Celestine Mendler-D{\"u}nner},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wGjSbaMsop}
} | We investigate algorithmic collective action in transformer-based recommender systems. Our use case is a collective of fans aiming to promote the visibility of an underrepresented artist by strategically placing one of their songs in the existing playlists they control. We introduce two easily implementable strategies to select the position at which to insert the song and boost recommendations at test time. The strategies exploit statistical properties of the learner to leverage discontinuities in the recommendations, and the long-tail nature of song distributions. We evaluate the efficacy of our strategies using a publicly available recommender system model released by a major music streaming platform. Our findings reveal that even small collectives (controlling less than 0.01\% of the training data) can achieve up to $40\times$ more test time recommendations than songs with similar training set occurrences, on average. Focusing on the externalities of the strategy, we find that the recommendations of other songs are largely preserved, and the newly gained recommendations are distributed across various artists. Together, our findings demonstrate how carefully designed collective action strategies can be effective while not necessarily being adversarial. | Algorithmic Collective Action in Recommender Systems: Promoting Songs by Reordering Playlists | [
"Joachim Baumann",
"Celestine Mendler-Dünner"
] | NeurIPS.cc/2024/Conference | 2404.04269 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=wGP1tBCP1E | @inproceedings{
chen2024diffusion,
title={Diffusion Models are Certifiably Robust Classifiers},
author={Huanran Chen and Yinpeng Dong and Shitong Shao and Zhongkai Hao and Xiao Yang and Hang Su and Jun Zhu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wGP1tBCP1E}
} | Generative learning, recognized for its effective modeling of data distributions, offers inherent advantages in handling out-of-distribution instances, especially for enhancing robustness to adversarial attacks. Among these, diffusion classifiers, utilizing powerful diffusion models, have demonstrated superior empirical robustness. However, a comprehensive theoretical understanding of their robustness is still lacking, raising concerns about their vulnerability to stronger future attacks. In this study, we prove that diffusion classifiers possess $O(1)$ Lipschitzness, and establish their certified robustness, demonstrating their inherent resilience. To achieve non-constant Lipschitzness, thereby obtaining much tighter certified robustness, we generalize diffusion classifiers to classify Gaussian-corrupted data. This involves deriving the evidence lower bounds (ELBOs) for these distributions, approximating the likelihood using the ELBO, and calculating classification probabilities via Bayes' theorem. Experimental results show the superior certified robustness of these Noised Diffusion Classifiers (NDCs). Notably, we achieve over 80\% and 70\% certified robustness on CIFAR-10 under adversarial perturbations with \(\ell_2\) norms less than 0.25 and 0.5, respectively, using a single off-the-shelf diffusion model without any additional data. | Diffusion Models are Certifiably Robust Classifiers | [
"Huanran Chen",
"Yinpeng Dong",
"Shitong Shao",
"Zhongkai Hao",
"Xiao Yang",
"Hang Su",
"Jun Zhu"
] | NeurIPS.cc/2024/Conference | 2402.02316 | [
"https://github.com/huanranchen/NoisedDiffusionClassifiers"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=wFzIMbTsY7 | @inproceedings{
huang2024decision,
title={Decision Mamba: Reinforcement Learning via Hybrid Selective Sequence Modeling},
author={Sili Huang and Jifeng Hu and Zhejian Yang and Liwei Yang and Tao Luo and Hechang Chen and Lichao Sun and Bo Yang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wFzIMbTsY7}
} | Recent works have shown the remarkable superiority of transformer models in reinforcement learning (RL), where the decision-making problem is formulated as sequential generation. Transformer-based agents can achieve self-improvement in online environments when provided with task contexts, such as multiple trajectories, a setting called in-context RL. However, due to the quadratic computation complexity of attention in transformers, current in-context RL methods suffer from huge computational costs as the task horizon increases. In contrast, the Mamba model is renowned for its ability to efficiently process long-term dependencies, which provides an opportunity for in-context RL to solve tasks that require long-term memory. To this end, we first implement Decision Mamba (DM) by replacing the backbone of Decision Transformer (DT). Then, we propose Decision Mamba-Hybrid (DM-H), which combines the merits of transformers and Mamba for high-quality prediction and long-term memory. Specifically, DM-H first generates high-value sub-goals from long-term memory through the Mamba model. Then, we use the sub-goals to prompt the transformer, establishing high-quality predictions. Experimental results demonstrate that DM-H achieves state-of-the-art performance on long- and short-term tasks, such as the D4RL, Grid World, and Tmaze benchmarks. Regarding efficiency, the online testing of DM-H in the long-term task is 28$\times$ faster than the transformer-based baselines. | Decision Mamba: Reinforcement Learning via Hybrid Selective Sequence Modeling | [
"Sili Huang",
"Jifeng Hu",
"Zhejian Yang",
"Liwei Yang",
"Tao Luo",
"Hechang Chen",
"Lichao Sun",
"Bo Yang"
] | NeurIPS.cc/2024/Conference | 2406.00079 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=wDirCeTIoz | @inproceedings{
liu2024communication,
title={Communication Efficient Distributed Training with Distributed Lion},
author={Bo Liu and Lemeng Wu and Lizhang Chen and Kaizhao Liang and Jiaxu Zhu and Chen Liang and Raghuraman Krishnamoorthi and qiang liu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wDirCeTIoz}
} | The Lion optimizer has been a promising competitor to AdamW for training large AI models, with advantages in memory, computation, and sample efficiency. In this paper, we introduce Distributed Lion, an innovative adaptation of Lion for distributed training environments. Leveraging the sign operator in Lion, our Distributed Lion only requires communicating binary or lower-precision vectors between workers and the central server, significantly reducing the communication cost.
Our theoretical analysis confirms Distributed Lion's convergence properties. Empirical results demonstrate its robustness across a range of tasks, worker counts, and batch sizes, on both vision and language problems. Notably, Distributed Lion attains comparable performance to standard Lion or AdamW optimizers applied on aggregated gradients, but with significantly reduced communication bandwidth. This feature is particularly advantageous for training large models. In addition, we also demonstrate that Distributed Lion presents a more favorable performance-bandwidth balance compared to existing efficient distributed methods such as deep gradient compression and ternary gradients. | Communication Efficient Distributed Training with Distributed Lion | [
"Bo Liu",
"Lemeng Wu",
"Lizhang Chen",
"Kaizhao Liang",
"Jiaxu Zhu",
"Chen Liang",
"Raghuraman Krishnamoorthi",
"qiang liu"
] | NeurIPS.cc/2024/Conference | 2404.00438 | [
""
] | https://huggingface.co/papers/2404.00438 | 2 | 2 | 0 | 8 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=wDDvJzvvBR | @inproceedings{
devnani2024learning,
title={Learning Spatially-Aware Language and Audio Embeddings},
author={Bhavika Suresh Devnani and Skyler Seto and Zakaria Aldeneh and Alessandro Toso and YELENA MENYAYLENKO and Barry-John Theobald and Jonathan Sheaffer and Miguel Sarabia},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wDDvJzvvBR}
} | Humans can picture a sound scene given an imprecise natural language description. For example, it is easy to imagine an acoustic environment given a phrase like "the lion roar came from right behind me!". For a machine to have the same degree of comprehension, the machine must know what a lion is (semantic attribute), what the concept of "behind" is (spatial attribute) and how these pieces of linguistic information align with the semantic and spatial attributes of the sound (what a roar sounds like when it's coming from behind).
State-of-the-art audio foundation models, such as CLAP, which learn to map between audio scenes and natural textual descriptions, are trained on non-spatial audio and text pairs, and hence lack spatial awareness. In contrast, sound event localization and detection models are limited to recognizing sounds from a fixed number of classes, and they localize the source to an absolute position (e.g., 0.2m) rather than a position described using natural language (e.g., "next to me"). To address these gaps, we present ELSA (Embeddings for Language and Spatial Audio), a spatially-aware audio and text embedding model trained using multimodal contrastive learning. ELSA supports non-spatial audio, spatial audio, and open vocabulary text captions describing both the spatial and semantic components of sound. To train ELSA: (a) we spatially augment the audio and captions of three open-source audio datasets totaling 4,738 hours and 890,038 samples of audio drawn from 8,972 simulated spatial configurations, and (b) we design an encoder to capture the semantics of non-spatial audio, and the semantics and spatial attributes of spatial audio using contrastive learning. ELSA is a single model that is competitive with the state-of-the-art for both semantic retrieval and 3D source localization. In particular, ELSA achieves +2.8\% mean audio-to-text and text-to-audio R@1 above the LAION-CLAP baseline, and reduces the mean absolute error in 3D source localization by 11.6° over the SeldNET baseline on the TUT Sound Events 2018 benchmark. Moreover, we show that the representation space of ELSA is structured, enabling the direction of audio to be swapped via vector arithmetic on two directional text embeddings. | Learning Spatially-Aware Language and Audio Embeddings | [
"Bhavika Suresh Devnani",
"Skyler Seto",
"Zakaria Aldeneh",
"Alessandro Toso",
"YELENA MENYAYLENKO",
"Barry-John Theobald",
"Jonathan Sheaffer",
"Miguel Sarabia"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=wBzvYh3PRA | @inproceedings{
sun2024factorsim,
title={FactorSim: Generative Simulation via Factorized Representation},
author={Fan-Yun Sun and Harini S I and Angela Yi and Yihan Zhou and Alex Zook and Jonathan Tremblay and Logan Cross and Jiajun Wu and Nick Haber},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wBzvYh3PRA}
} | Generating simulations to train intelligent agents in game-playing and robotics from natural language input, user input, or task documentation remains an open-ended challenge. Existing approaches focus on parts of this challenge, such as generating reward functions or task hyperparameters. Unlike previous work, we introduce FACTORSIM that generates full simulations in code from language input that can be used to train agents. Exploiting the structural modularity specific to coded simulations, we propose to use a factored partially observable Markov decision process representation that allows us to reduce context dependence during each step of the generation. For evaluation, we introduce a generative simulation benchmark that assesses the generated simulation code’s accuracy and effectiveness in facilitating zero-shot transfers in reinforcement learning settings. We show that FACTORSIM outperforms existing methods in generating simulations regarding prompt alignment (i.e., accuracy), zero-shot transfer abilities, and human evaluation. We also demonstrate its effectiveness in generating robotic tasks. | FactorSim: Generative Simulation via Factorized Representation | [
"Fan-Yun Sun",
"Harini S I",
"Angela Yi",
"Yihan Zhou",
"Alex Zook",
"Jonathan Tremblay",
"Logan Cross",
"Jiajun Wu",
"Nick Haber"
] | NeurIPS.cc/2024/Conference | 2409.17652 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=wBtmN8SZ2B | @inproceedings{
sinha2024learning,
title={Learning Structured Representations with Hyperbolic Embeddings},
author={Aditya Sinha and Siqi Zeng and Makoto Yamada and Han Zhao},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wBtmN8SZ2B}
} | Most real-world datasets consist of a natural hierarchy between classes or an inherent label structure that is either already available or can be constructed cheaply. However, most existing representation learning methods ignore this hierarchy, treating labels as permutation invariant. Recent work [Zeng et al., 2022] proposes using this structured information explicitly, but the use of Euclidean distance may distort the underlying semantic context [Chen et al., 2013]. In this work, motivated by the advantage of hyperbolic spaces in modeling hierarchical relationships, we propose a novel approach, HypStructure: a Hyperbolic Structured regularization approach to accurately embed the label hierarchy into the learned representations. HypStructure is a simple-yet-effective regularizer that consists of a hyperbolic tree-based representation loss along with a centering loss, and can be combined with any standard task loss to learn hierarchy-informed features. Extensive experiments on several large-scale vision benchmarks demonstrate the efficacy of HypStructure in reducing distortion and boosting generalization performance, especially in low-dimensional scenarios. For a better understanding of structured representation, we perform an eigenvalue analysis that links the representation geometry to the improved Out-of-Distribution (OOD) detection performance seen empirically. | Learning Structured Representations with Hyperbolic Embeddings | [
"Aditya Sinha",
"Siqi Zeng",
"Makoto Yamada",
"Han Zhao"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=wAqdvcK1Fv | @inproceedings{
schr{\"o}der2024energybased,
title={Energy-Based Modelling for Discrete and Mixed Data via Heat Equations on Structured Spaces},
author={Tobias Schr{\"o}der and Zijing Ou and Yingzhen Li and Andrew B. Duncan},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=wAqdvcK1Fv}
} | Energy-based models (EBMs) offer a flexible framework for probabilistic modelling across various data domains. However, training EBMs on data in discrete or mixed state spaces poses significant challenges due to the lack of robust and fast sampling methods. In this work, we propose to train discrete EBMs with Energy Discrepancy, a loss function which only requires the evaluation of the energy function at data points and their perturbed counterparts, thus eliminating the need for Markov chain Monte Carlo. We introduce perturbations of the data distribution by simulating a diffusion process on the discrete state space endowed with a graph structure. This allows us to inform the choice of perturbation from the structure of the modelled discrete variable, while the continuous time parameter enables fine-grained control of the perturbation. Empirically, we demonstrate the efficacy of the proposed approaches in a wide range of applications, including the estimation of discrete densities with non-binary vocabulary and binary image modelling. We also introduce the first application of EBMs to tabular data sets with applications in synthetic data generation and calibrated classification. | Energy-Based Modelling for Discrete and Mixed Data via Heat Equations on Structured Spaces | [
"Tobias Schröder",
"Zijing Ou",
"Yingzhen Li",
"Andrew B. Duncan"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=w6vbfSC1y0 | @inproceedings{
yu2024selfcalibrated,
title={Self-Calibrated Tuning of Vision-Language Models for Out-of-Distribution Detection},
author={Geng Yu and Jianing Zhu and Jiangchao Yao and Bo Han},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=w6vbfSC1y0}
} | Out-of-distribution (OOD) detection is crucial for deploying reliable machine learning models in open-world applications. Recent advances in CLIP-based OOD detection have shown promising results via regularizing prompt tuning with OOD features extracted from ID data. However, the irrelevant context mined from ID data can be spurious due to the inaccurate foreground-background decomposition, thus limiting the OOD detection performance. In this work, we propose a novel framework, namely, \textit{Self-Calibrated Tuning (SCT)}, to mitigate this problem for effective OOD detection with only the given few-shot ID data. Specifically, SCT introduces modulating factors respectively on the two components of the original learning objective. It adaptively directs the optimization process between the two tasks during training on data with different prediction uncertainty to calibrate the influence of OOD regularization, which is compatible with many prompt tuning based OOD detection methods. Extensive experiments and analyses have been conducted to characterize and demonstrate the effectiveness of the proposed SCT. The code is publicly available at: https://github.com/tmlr-group/SCT. | Self-Calibrated Tuning of Vision-Language Models for Out-of-Distribution Detection | [
"Geng Yu",
"Jianing Zhu",
"Jiangchao Yao",
"Bo Han"
] | NeurIPS.cc/2024/Conference | 2411.03359 | [
"https://github.com/tmlr-group/sct"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=w6q46IslSR | @inproceedings{
yang2024training,
title={Training Dynamics of Transformers to Recognize Word Co-occurrence via Gradient Flow Analysis},
author={Hongru Yang and Bhavya Kailkhura and Zhangyang Wang and Yingbin Liang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=w6q46IslSR}
} | Understanding the training dynamics of transformers is important for explaining the impressive capabilities behind large language models.
In this work, we study the dynamics of training a shallow transformer on a task of recognizing the co-occurrence of two designated words. In the literature on the training dynamics of transformers, several simplifications are commonly adopted, such as weight reparameterization, attention linearization, special initialization, and the lazy regime. In contrast, we analyze the gradient flow dynamics of simultaneously training three attention matrices and a linear MLP layer from random initialization, and provide a framework for analyzing such dynamics via a coupled dynamical system. We establish near-minimum loss and characterize the attention model after training. We discover that gradient flow serves as an inherent mechanism that naturally divides the training process into two phases. In Phase 1, the linear MLP quickly aligns with the two target signals for correct classification, whereas the softmax attention remains almost unchanged. In Phase 2, the attention matrices and the MLP evolve jointly to enlarge the classification margin and reduce the loss to a near-minimum value. Technically, we prove a novel property of the gradient flow, termed \textit{automatic balancing of gradients}, which enables the loss values of different samples to decrease almost at the same rate and further facilitates the proof of near-minimum training loss. We also conduct experiments to verify our theoretical results. | Training Dynamics of Transformers to Recognize Word Co-occurrence via Gradient Flow Analysis | [
"Hongru Yang",
"Bhavya Kailkhura",
"Zhangyang Wang",
"Yingbin Liang"
] | NeurIPS.cc/2024/Conference | 2410.09605 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=w67vRHZF13 | @inproceedings{
chow2024unified,
title={Unified Generative and Discriminative Training for Multi-modal Large Language Models},
author={Wei Chow and Juncheng Li and Qifan Yu and Kaihang Pan and Hao Fei and Zhiqi Ge and Shuai Yang and Siliang Tang and Hanwang Zhang and Qianru Sun},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=w67vRHZF13}
} | In recent times, Vision-Language Models (VLMs) have been trained under two predominant paradigms. Generative training has enabled Multimodal Large Language Models (MLLMs) to tackle various complex tasks, yet issues such as hallucinations and weak object discrimination persist. Discriminative training, exemplified by models like CLIP, excels in zero-shot image-text classification and retrieval, yet struggles with complex scenarios requiring fine-grained semantic differentiation. This paper addresses these challenges by proposing a unified approach that integrates the strengths of both paradigms. Considering interleaved image-text sequences as the general format of input samples, we introduce a structure-induced training strategy that imposes semantic relationships between input samples and the MLLM’s hidden state. This approach enhances the MLLM’s ability to capture global semantics and distinguish fine-grained semantics. By leveraging dynamic sequence alignment within the Dynamic Time Warping framework and integrating a novel kernel for fine-grained semantic differentiation, our method effectively balances generative and discriminative tasks. Extensive experiments demonstrate the effectiveness of our approach, achieving state-of-the-art results in multiple generative tasks, especially those requiring cognitive and discrimination abilities. Additionally, our method surpasses discriminative benchmarks in interleaved and fine-grained retrieval tasks. By employing a retrieval-augmented generation strategy, our approach further enhances performance in some generative tasks within one model, offering a promising direction for future research in vision-language modeling. | Unified Generative and Discriminative Training for Multi-modal Large Language Models | [
"Wei Chow",
"Juncheng Li",
"Qifan Yu",
"Kaihang Pan",
"Hao Fei",
"Zhiqi Ge",
"Shuai Yang",
"Siliang Tang",
"Hanwang Zhang",
"Qianru Sun"
] | NeurIPS.cc/2024/Conference | 2411.00304 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=w50ICQC6QJ | @inproceedings{
liu2024discovery,
title={Discovery of the Hidden World with Large Language Models},
author={Chenxi Liu and Yongqiang Chen and Tongliang Liu and Mingming Gong and James Cheng and Bo Han and Kun Zhang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=w50ICQC6QJ}
} | Revealing the underlying causal mechanisms in the real world is the key to the development of science. Despite the progress in the past decades, traditional causal discovery approaches (CDs) mainly rely on high-quality measured variables, usually given by human experts, to find causal relations. The lack of well-defined high-level variables in many real-world applications has already been a longstanding roadblock to a broader application of CDs. To this end, this paper presents Causal representatiOn AssistanT (COAT) that introduces large language models (LLMs) to bridge the gap. LLMs are trained on massive observations of the world and have demonstrated great capability in extracting key information from unstructured data. Therefore, it is natural to employ LLMs to assist with proposing useful high-level factors and crafting their measurements. Meanwhile, COAT also adopts CDs to find causal relations among the identified variables as well as to provide feedback to LLMs to iteratively refine the proposed factors. We show that LLMs and CDs are mutually beneficial and the constructed feedback provably also helps with the factor proposal. We construct and curate several synthetic and real-world benchmarks including analysis of human reviews and diagnosis of neuropathic and brain tumors, to comprehensively evaluate COAT. Extensive empirical results confirm the effectiveness and reliability of COAT with significant improvements. | Discovery of the Hidden World with Large Language Models | [
"Chenxi Liu",
"Yongqiang Chen",
"Tongliang Liu",
"Mingming Gong",
"James Cheng",
"Bo Han",
"Kun Zhang"
] | NeurIPS.cc/2024/Conference | 2402.03941 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=w4AnTVxAO9 | @inproceedings{
liu2024can,
title={Can Language Models Learn to Skip Steps?},
author={Tengxiao Liu and Qipeng Guo and Xiangkun Hu and Cheng Jiayang and Yue Zhang and Xipeng Qiu and Zheng Zhang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=w4AnTVxAO9}
} | Trained on vast corpora of human language, language models demonstrate emergent human-like reasoning abilities. Yet they are still far from true intelligence, which opens up intriguing opportunities to explore the parallels of humans and model behaviors. In this work, we study the ability to skip steps in reasoning—a hallmark of human expertise developed through practice. Unlike humans, who may skip steps to enhance efficiency or to reduce cognitive load, models do not inherently possess such motivations to minimize reasoning steps. To address this, we introduce a controlled framework that stimulates step-skipping behavior by iteratively refining models to generate shorter and accurate reasoning paths. Empirical results indicate that models can develop the step skipping ability under our guidance. Moreover, after fine-tuning on expanded datasets that include both complete and skipped reasoning sequences, the models can not only resolve tasks with increased efficiency without sacrificing accuracy, but also exhibit comparable and even enhanced generalization capabilities in out-of-domain scenarios. Our work presents the first exploration into human-like step-skipping ability and provides fresh perspectives on how such cognitive abilities can benefit AI models. | Can Language Models Learn to Skip Steps? | [
"Tengxiao Liu",
"Qipeng Guo",
"Xiangkun Hu",
"Cheng Jiayang",
"Yue Zhang",
"Xipeng Qiu",
"Zheng Zhang"
] | NeurIPS.cc/2024/Conference | 2411.01855 | [
"https://github.com/tengxiaoliu/LM_skip"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=w3JCTBRduf | @inproceedings{
tsikouras2024optimization,
title={Optimization Can Learn Johnson Lindenstrauss Embeddings},
author={Nikos Tsikouras and Constantine Caramanis and Christos Tzamos},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=w3JCTBRduf}
} | Embeddings play a pivotal role across various disciplines, offering compact representations of complex data structures. Randomized methods like Johnson-Lindenstrauss (JL) provide state-of-the-art and essentially unimprovable theoretical guarantees for achieving such representations. These guarantees are worst-case and, in particular, neither the analysis ${\textit{nor the algorithm}}$ takes into account any potential structural information of the data. The natural question is: must we randomize? Could we instead use an optimization-based approach, working directly with the data? A first answer is no: as we show, the distance-preserving objective of JL has a non-convex landscape over the space of projection matrices, with many bad stationary points. But this is not the final answer.
We present a novel method, motivated by diffusion models, that circumvents this fundamental challenge: rather than performing optimization directly over the space of projection matrices, we use optimization over the larger space of $\textit{random solution samplers}$, gradually reducing the variance of the sampler. We show that by moving through this larger space, our objective converges to a deterministic (zero variance) solution, avoiding bad stationary points.
This method can also be seen as an optimization-based derandomization approach, an idea that we believe can be applied to many other problems. | Optimization Can Learn Johnson Lindenstrauss Embeddings | [
"Nikos Tsikouras",
"Constantine Caramanis",
"Christos Tzamos"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=w2L3Ll1jbV | @inproceedings{
watkins2024adversarially,
title={Adversarially Robust Multi-task Representation Learning},
author={Austin Watkins and Thanh Nguyen-Tang and Enayat Ullah and Raman Arora},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=w2L3Ll1jbV}
} | We study adversarially robust transfer learning, wherein, given labeled data on multiple (source) tasks, the goal is to train a model with small robust error on a previously unseen (target) task.
In particular, we consider a multi-task representation learning (MTRL) setting, i.e., we assume that the source and target tasks admit a simple (linear) predictor on top of a shared representation (e.g., the final hidden layer of a deep neural network).
In this general setting, we provide rates on the excess adversarial (transfer) risk for Lipschitz losses and smooth nonnegative losses.
These rates show that learning a representation using adversarial training on diverse tasks helps protect against inference-time attacks in data-scarce environments.
Additionally, we provide novel rates for the single-task setting. | Adversarially Robust Multi-task Representation Learning | [
"Austin Watkins",
"Thanh Nguyen-Tang",
"Enayat Ullah",
"Raman Arora"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=w28i9oe9Xr | @inproceedings{
tao2024high,
title={High Rank Path Development: an approach to learning the filtration of stochastic processes},
author={Jiajie Tao and Hao Ni and Chong Liu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=w28i9oe9Xr}
} | Since weak convergence of stochastic processes does not account for the growth of information over time, which is represented by the underlying filtration, a slightly erroneous stochastic model in the weak topology may cause huge losses in multi-period decision-making problems. To address such discontinuities, Aldous introduced the extended weak convergence, which can fully characterise all essential properties, including the filtration, of stochastic processes; however, it was considered hard to find efficient numerical implementations. In this paper, we introduce a novel metric called High Rank PCF Distance (HRPCFD) for extended weak convergence, based on the high rank path development method from rough path theory, which also defines the characteristic function for measure-valued processes. We then show that HRPCFD admits many favourable analytic properties, which allow us to design an efficient algorithm for training HRPCFD from data and to construct the HRPCF-GAN, which uses HRPCFD as the discriminator for conditional time series generation. Our numerical experiments on both hypothesis testing and generative modelling validate the superior performance of our approach compared with several state-of-the-art methods, highlighting its potential in broad applications of synthetic time series generation and in addressing classic financial and economic challenges, such as optimal stopping or utility maximisation problems. Code is available at https://github.com/DeepIntoStreams/High-Rank-PCF-GAN.git. | High Rank Path Development: an approach to learning the filtration of stochastic processes | [
"Jiajie Tao",
"Hao Ni",
"Chong Liu"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=vymkuBMLlh | @inproceedings{
rahman2024conditional,
title={Conditional Generative Models are Sufficient to Sample from Any Causal Effect Estimand},
author={Md Musfiqur Rahman and Matt Jordan and Murat Kocaoglu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vymkuBMLlh}
} | Causal inference from observational data plays a critical role in many applications in trustworthy machine learning.
While sound and complete algorithms exist to compute causal effects, many of them assume access to conditional likelihoods,
which is difficult to estimate for high-dimensional (particularly image) data. Researchers have alleviated this issue by simulating causal relations with neural models. However, when we have high-dimensional variables in the causal graph along with some unobserved confounders, no existing work can effectively sample from the un/conditional interventional distributions. In this work, we show how to sample from any identifiable interventional distribution given an arbitrary causal graph through a sequence of push-forward computations of conditional generative models, such as diffusion models. Our proposed algorithm follows the recursive steps of the existing likelihood-based identification algorithms to train a set of feed-forward models, and connect them in a specific way to sample from the desired distribution. We conduct experiments on a Colored MNIST dataset having both the treatment ($X$) and the target variables ($Y$) as images and sample from $P(y|do(x))$. Our algorithm also enables us to conduct a causal analysis to evaluate spurious correlations among input features of generative models pre-trained on the CelebA dataset. Finally, we generate high-dimensional interventional samples from the MIMIC-CXR dataset involving text and image variables. | Conditional Generative Models are Sufficient to Sample from Any Causal Effect Estimand | [
"Md Musfiqur Rahman",
"Matt Jordan",
"Murat Kocaoglu"
] | NeurIPS.cc/2024/Conference | [
"https://github.com/musfiqshohan/idgen"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=vx4NgdyyVG | @inproceedings{
luo2024revive,
title={Revive Re-weighting in Imbalanced Learning by Density Ratio Estimation},
author={Jiaan Luo and Feng Hong and Jiangchao Yao and Bo Han and Ya Zhang and Yanfeng Wang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vx4NgdyyVG}
} | In deep learning, model performance often deteriorates when trained on highly imbalanced datasets, especially when evaluation metrics require robust generalization across underrepresented classes. To address the challenges posed by imbalanced data distributions, this study introduces a novel method utilizing density ratio estimation for dynamic class weight adjustment, termed as Re-weighting with Density Ratio (RDR). Our method adaptively adjusts the importance of each class during training, mitigates overfitting on dominant classes and enhances model adaptability across diverse datasets. Extensive experiments conducted on various large scale benchmark datasets validate the effectiveness of our method. Results demonstrate substantial improvements in generalization capabilities, particularly under severely imbalanced conditions. | Revive Re-weighting in Imbalanced Learning by Density Ratio Estimation | [
"Jiaan Luo",
"Feng Hong",
"Jiangchao Yao",
"Bo Han",
"Ya Zhang",
"Yanfeng Wang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
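A minimal sketch of the classifier-based density-ratio trick that re-weighting methods of this kind build on: a probe classifier distinguishes training samples from a reference (e.g., balanced) sample, and its odds give per-sample weights. The logistic-regression estimator, the reference set, and the normalization below are illustrative assumptions; RDR's exact dynamic per-class scheme follows the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def density_ratio_weights(feats_train, feats_ref):
    """Estimate r(x) = p_ref(x) / p_train(x) with a probe classifier and
    return it as normalized per-sample training weights."""
    X = np.vstack([feats_train, feats_ref])
    y = np.concatenate([np.zeros(len(feats_train)), np.ones(len(feats_ref))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    p_ref = np.clip(clf.predict_proba(feats_train)[:, 1], 1e-6, 1 - 1e-6)
    # Odds give the ratio, corrected for the sizes of the two sample sets.
    ratio = (p_ref / (1.0 - p_ref)) * (len(feats_train) / len(feats_ref))
    return ratio / ratio.mean()          # weights average to 1

rng = np.random.default_rng(0)
w = density_ratio_weights(rng.normal(0.0, 1.0, (500, 8)),
                          rng.normal(0.5, 1.0, (200, 8)))
```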
null | https://openreview.net/forum?id=vwgWbCxeAQ | @inproceedings{
zhang2024rethinking,
title={Rethinking Misalignment in Vision-Language Model Adaptation from a Causal Perspective},
author={Yanan Zhang and Jiangmeng Li and Lixiang Liu and Wenwen Qiang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vwgWbCxeAQ}
} | Foundational Vision-Language models such as CLIP have exhibited impressive generalization in downstream tasks. However, CLIP suffers from a two-level misalignment issue, i.e., task misalignment and data misalignment, when adapting to specific tasks. Soft prompt tuning has mitigated the task misalignment, yet the data misalignment remains a challenge. To analyze the impacts of the data misalignment, we revisit the pre-training and adaptation processes of CLIP and develop a structural causal model. We discover that while we expect to capture task-relevant information for downstream tasks accurately, the task-irrelevant knowledge impacts the prediction results and hampers the modeling of the true relationships between the images and the predicted classes. As task-irrelevant knowledge is unobservable, we leverage the front-door adjustment and propose Causality-Guided Semantic Decoupling and Classification (CDC) to mitigate the interference of task-irrelevant knowledge. Specifically, we decouple semantics contained in the data of downstream tasks and perform classification based on each semantic. Furthermore, we employ the Dempster-Shafer evidence theory to evaluate the uncertainty of each prediction generated by diverse semantics. Experiments conducted in multiple different settings have consistently demonstrated the effectiveness of CDC. | Rethinking Misalignment in Vision-Language Model Adaptation from a Causal Perspective | [
"Yanan Zhang",
"Jiangmeng Li",
"Lixiang Liu",
"Wenwen Qiang"
] | NeurIPS.cc/2024/Conference | 2410.12816 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=vvpewjtnvm | @inproceedings{
li2024low,
title={Low Precision Local Training is Enough for Federated Learning},
author={Zhiwei Li and Yiqiu LI and Binbin Lin and Zhongming Jin and WEIZHONG ZHANG},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vvpewjtnvm}
} | Federated Learning (FL) is a prevalent machine learning paradigm designed to address challenges posed by heterogeneous client data while preserving data privacy.
Unlike distributed training, it typically orchestrates resource-constrained edge devices to communicate with a central server via a low-bandwidth communication network. This urges the development of more computation- and communication-efficient training algorithms. In this paper, we propose an efficient FL paradigm, where the local models in the clients are trained with low-precision operations and communicated to the server in a low-precision format, while only the model aggregation in the server is performed with high-precision computation. We surprisingly find that high-precision models can be recovered from the low-precision local models with proper aggregation in the server.
In this way, both the client-side workload and the communication cost can be significantly reduced. We theoretically show that our proposed paradigm converges to the optimal solution as training proceeds, which demonstrates that low precision local training is enough for FL. Our paradigm can be flexibly integrated with existing FL algorithms. Experiments across extensive benchmarks are conducted to showcase the effectiveness of our proposed method. Notably, models trained by our method with precision as low as 8 bits are comparable to those from full precision training. As a by-product, we show that low precision local training can relieve the over-fitting issue in local training, which under heterogeneous client data can cause the client models to drift further away from each other and lead to failure in model aggregation. | Low Precision Local Training is Enough for Federated Learning | [
"Zhiwei Li",
"Yiqiu LI",
"Binbin Lin",
"Zhongming Jin",
"WEIZHONG ZHANG"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
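A minimal numpy sketch of the communication pattern the abstract describes: clients ship low-precision (here int8) weights plus a single fp32 scale, and only the server-side aggregation runs in high precision. The uniform symmetric quantizer and the stand-in "local training" step are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def quantize(w, bits=8):
    """Uniform symmetric quantization of a weight vector to `bits` bits."""
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1) + 1e-12
    return np.round(w / scale).astype(np.int8), scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# Toy round: each client "trains" (here: a noisy update) and uploads int8
# weights with one fp32 scale; the server aggregates in full precision.
rng = np.random.default_rng(0)
global_w = rng.normal(size=1000).astype(np.float32)
client_updates = []
for _ in range(10):
    local_w = global_w - 0.01 * rng.normal(size=global_w.shape).astype(np.float32)
    client_updates.append(quantize(local_w, bits=8))

# High-precision aggregation recovers a model close to the fp32 average.
global_w = np.mean([dequantize(q, s) for q, s in client_updates], axis=0)
```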
null | https://openreview.net/forum?id=vunJCq9PwU | @inproceedings{
li2024great,
title={{GREAT} Score: Global Robustness Evaluation of Adversarial Perturbation using Generative Models},
author={ZAITANG LI and Pin-Yu Chen and Tsung-Yi Ho},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vunJCq9PwU}
} | Current studies on adversarial robustness mainly focus on aggregating \textit{local} robustness results from a set of data samples to evaluate and rank different models. However, the local statistics may not well represent the true \textit{global} robustness of the underlying unknown data distribution. To address this challenge, this paper makes the first attempt to present a new framework, called \textit{GREAT Score}, for global robustness evaluation of adversarial perturbation using generative models. Formally, GREAT Score carries the physical meaning of a global statistic capturing a mean certified attack-proof perturbation level over all samples drawn from a generative model. For finite-sample evaluation, we also derive a probabilistic guarantee on the sample complexity and the difference between the sample mean and the true mean. GREAT Score has several advantages: (1) Robustness evaluations using GREAT Score are efficient and scalable to large models, as they spare the need to run adversarial attacks. In particular, we show high correlation and significantly reduced computation cost of GREAT Score when compared to the attack-based model ranking on RobustBench \cite{croce2021robustbench}. (2) The use of generative models facilitates the approximation of the unknown data distribution. In our ablation study with different generative adversarial networks (GANs), we observe consistency between global robustness evaluation and the quality of GANs. (3) GREAT Score can be used for remote auditing of privacy-sensitive black-box models, as demonstrated by our robustness evaluation on several online facial recognition services. | GREAT Score: Global Robustness Evaluation of Adversarial Perturbation using Generative Models | [
"ZAITANG LI",
"Pin-Yu Chen",
"Tsung-Yi Ho"
] | NeurIPS.cc/2024/Conference | 2304.09875 | [
"https://github.com/ibm/great-score"
] | https://huggingface.co/papers/2304.09875 | 0 | 0 | 0 | 3 | [] | [] | [
"TrustSafeAI/GREAT-Score"
] | [] | [] | [
"TrustSafeAI/GREAT-Score"
] | 1 | poster |
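A hedged Monte-Carlo sketch of the idea of a global robustness statistic: average a margin-based certified-radius proxy over samples drawn from a generative model. The margin-to-radius conversion with a Lipschitz constant below is a stand-in assumption; the paper derives the exact certified quantity.

```python
import numpy as np

def great_score_estimate(generate, classify, n_samples=1000, lipschitz=1.0):
    """Monte-Carlo estimate of a global robustness score: the mean of a
    margin-based certified-radius proxy over generated samples.
    `generate(n)` -> inputs, `classify(x)` -> per-class scores."""
    x = generate(n_samples)
    logits = classify(x)                       # shape (n, num_classes)
    part = np.partition(logits, -2, axis=1)
    margin = part[:, -1] - part[:, -2]         # top-1 minus runner-up score
    # The 1/(sqrt(2)*L) margin-to-radius conversion here is an assumption.
    radius = np.maximum(margin, 0.0) / (np.sqrt(2.0) * lipschitz)
    return radius.mean()

rng = np.random.default_rng(0)
W = rng.normal(size=(32, 10))                  # toy linear "classifier"
score = great_score_estimate(lambda n: rng.normal(size=(n, 32)),
                             lambda x: x @ W, n_samples=256)
```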
null | https://openreview.net/forum?id=vtRotUd539 | @inproceedings{
beaglehole2024average,
title={Average gradient outer product as a mechanism for deep neural collapse},
author={Daniel Beaglehole and Peter S{\'u}ken{\'\i}k and Marco Mondelli and Mikhail Belkin},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vtRotUd539}
} | Deep Neural Collapse (DNC) refers to the surprisingly rigid structure of the data representations in the final layers of Deep Neural Networks (DNNs). Though the phenomenon has been measured in a variety of settings, its emergence is typically explained via data-agnostic approaches, such as the unconstrained features model. In this work, we introduce a data-dependent setting where DNC forms due to feature learning through the average gradient outer product (AGOP). The AGOP is defined with respect to a learned predictor and is equal to the uncentered covariance matrix of its input-output gradients averaged over the training dataset. Deep Recursive Feature Machines are a method that constructs a neural network by iteratively mapping the data with the AGOP and applying an untrained random feature map. We demonstrate theoretically and empirically that DNC occurs in Deep Recursive Feature Machines as a consequence of the projection with the AGOP matrix computed at each layer. We then provide evidence that this mechanism holds for neural networks more generally. We show that the right singular vectors and values of the weights can be responsible for the majority of within-class variability collapse for DNNs trained in the feature learning regime. As observed in recent work, this singular structure is highly correlated with that of the AGOP. | Average gradient outer product as a mechanism for deep neural collapse | [
"Daniel Beaglehole",
"Peter Súkeník",
"Marco Mondelli",
"Mikhail Belkin"
] | NeurIPS.cc/2024/Conference | 2402.13728 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=vt2qkE1Oax | @inproceedings{
karazija2024learning,
title={Learning Segmentation from Point Trajectories},
author={Laurynas Karazija and Iro Laina and Christian Rupprecht and Andrea Vedaldi},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vt2qkE1Oax}
} | We consider the problem of segmenting objects in videos based on their motion and no other forms of supervision. Prior work has often approached this problem by using the principle of common fate, namely the fact that the motion of points that belong to the same object is strongly correlated. However, most authors have only considered instantaneous motion from optical flow. In this work, we present a way to train a segmentation network using long-term point trajectories as a supervisory signal to complement optical flow. The key difficulty is that long-term motion, unlike instantaneous motion, is difficult to model -- any parametric approximation is unlikely to capture complex motion patterns over long periods of time. We instead draw inspiration from subspace clustering approaches, proposing a loss function that seeks to group the trajectories into low-rank matrices where the motion of object points can be approximately explained as a linear combination of other point tracks. Our method outperforms the prior art on motion-based segmentation, which shows the utility of long-term motion and the effectiveness of our formulation. | Learning Segmentation from Point Trajectories | [
"Laurynas Karazija",
"Iro Laina",
"Christian Rupprecht",
"Andrea Vedaldi"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=vpEq2bzsS0 | @inproceedings{
zhu2024mote,
title={Mo{TE}: Reconciling Generalization with Specialization for Visual-Language to Video Knowledge Transfer},
author={Minghao Zhu and Zhengpu Wang and Mengxian Hu and Ronghao Dang and Xiao Lin and Xun Zhou and Chengju Liu and Qijun Chen},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vpEq2bzsS0}
} | Transferring visual-language knowledge from large-scale foundation models for video recognition has proved to be effective. To bridge the domain gap, additional parametric modules are added to capture the temporal information. However, zero-shot generalization diminishes with the increase in the number of specialized parameters, forcing existing works to trade off between zero-shot and close-set performance. In this paper, we present MoTE, a novel framework that enables generalization and specialization to be balanced in one unified model. Our approach tunes a mixture of temporal experts to learn multiple task views with various degrees of data fitting. To maximally preserve the knowledge of each expert, we propose Weight Merging Regularization, which regularizes the merging process of experts in weight space. Additionally, we apply temporal feature modulation to regularize the contribution of temporal features at test time. We achieve a sound balance between zero-shot and close-set video recognition tasks and obtain state-of-the-art or competitive results on various datasets, including Kinetics-400 \& 600, UCF, and HMDB. Code is available at https://github.com/ZMHH-H/MoTE. | MoTE: Reconciling Generalization with Specialization for Visual-Language to Video Knowledge Transfer | [
"Minghao Zhu",
"Zhengpu Wang",
"Mengxian Hu",
"Ronghao Dang",
"Xiao Lin",
"Xun Zhou",
"Chengju Liu",
"Qijun Chen"
] | NeurIPS.cc/2024/Conference | 2410.10589 | [
"https://github.com/zmhh-h/mote"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=voJCpdlw53 | @inproceedings{
ren2024ultrapixel,
title={UltraPixel: Advancing Ultra High-Resolution Image Synthesis to New Peaks},
author={Jingjing Ren and Wenbo Li and Haoyu Chen and Renjing Pei and Bin Shao and Yong Guo and Long Peng and Fenglong Song and Lei Zhu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=voJCpdlw53}
} | Ultra-high-resolution image generation poses great challenges, such as increased semantic planning complexity and detail synthesis difficulties, alongside substantial training resource demands. We present UltraPixel, a novel architecture utilizing cascade diffusion models to generate high-quality images at multiple resolutions (\textit{e.g.}, 1K, 2K, and 4K) within a single model, while maintaining computational efficiency. UltraPixel leverages semantics-rich representations of lower-resolution images in a later denoising stage to guide the whole generation of highly detailed high-resolution images, significantly reducing complexity. Specifically, we introduce implicit neural representations for continuous upsampling and scale-aware normalization layers adaptable to various resolutions. Notably, both low- and high-resolution processes are performed in the most compact space, sharing the majority of parameters with less than 3$\%$ additional parameters for high-resolution outputs, largely enhancing training and inference efficiency. Our model achieves fast training with reduced data requirements, producing photo-realistic high-resolution images and demonstrating state-of-the-art performance in extensive experiments. | UltraPixel: Advancing Ultra High-Resolution Image Synthesis to New Peaks | [
"Jingjing Ren",
"Wenbo Li",
"Haoyu Chen",
"Renjing Pei",
"Bin Shao",
"Yong Guo",
"Long Peng",
"Fenglong Song",
"Lei Zhu"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=vo5LONGAdo | @inproceedings{
fang2024remixdit,
title={Remix-DiT: Mixing Diffusion Transformers for Multi-Expert Denoising},
author={Gongfan Fang and Xinyin Ma and Xinchao Wang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vo5LONGAdo}
} | Transformer-based diffusion models have achieved significant advancements across a variety of generative tasks. However, producing high-quality outputs typically necessitates large transformer models, which result in substantial training and inference overhead. In this work, we investigate an alternative approach involving multiple experts for denoising, and introduce RemixDiT, a novel method designed to enhance output quality at a low cost. The goal of RemixDiT is to craft N diffusion experts for different denoising timesteps, yet without the need for expensive training of N independent models. To achieve this, RemixDiT employs K basis models (where K < N) and utilizes learnable mixing coefficients to adaptively craft expert models. This design offers two significant advantages: first, although the total model size is increased, the model produced by the mixing operation shares the same architecture as a plain model, making the overall model as efficient as a standard diffusion transformer. Second, the learnable mixing adaptively allocates model capacity across timesteps, thereby effectively improving generation quality. Experiments conducted on the ImageNet dataset demonstrate that RemixDiT achieves promising results compared to standard diffusion transformers and other multiple-expert methods. | Remix-DiT: Mixing Diffusion Transformers for Multi-Expert Denoising | [
"Gongfan Fang",
"Xinyin Ma",
"Xinchao Wang"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
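A toy PyTorch sketch of the mixing mechanism for a single linear layer: N experts are crafted on the fly as learnable combinations of K basis weights, so the mixed model keeps a plain layer's inference cost. The softmax normalization of the coefficients and the per-layer granularity are assumptions; the paper applies the idea across a full diffusion transformer.

```python
import torch
import torch.nn as nn

class RemixLinear(nn.Module):
    """A linear layer whose weight is a learnable mix of K basis weights;
    expert n (assigned to a bin of denoising timesteps) is crafted as a
    softmax(alpha[n])-weighted combination of the bases."""
    def __init__(self, d_in, d_out, K=4, N=8):
        super().__init__()
        self.basis = nn.Parameter(torch.randn(K, d_out, d_in) * 0.02)
        self.alpha = nn.Parameter(torch.zeros(N, K))   # mixing coefficients

    def forward(self, x, expert_id):
        coef = torch.softmax(self.alpha[expert_id], dim=-1)   # (K,)
        W = torch.einsum("k,koi->oi", coef, self.basis)       # mixed weight
        return x @ W.T

layer = RemixLinear(d_in=16, d_out=16)
y = layer(torch.randn(2, 16), expert_id=3)   # expert for one timestep bin
```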
null | https://openreview.net/forum?id=vjw4TIf8Bo | @inproceedings{
lu2024padellmner,
title={PaDe{LLM}-{NER}: Parallel Decoding in Large Language Models for Named Entity Recognition},
author={Jinghui Lu and Yanjie Wang and Ziwei Yang and Xuejing Liu and Brian Mac Namee and Can Huang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vjw4TIf8Bo}
} | In this study, we aim to reduce generation latency for Named Entity Recognition (NER) with Large Language Models (LLMs). The main cause of high latency in LLMs is the sequential decoding process, which autoregressively generates all labels and mentions for NER, significantly increasing the sequence length. To this end, we introduce Parallel Decoding in LLM for NER (PaDeLLM-NER), an approach that integrates seamlessly into existing generative model frameworks without necessitating additional modules or architectural modifications. PaDeLLM-NER allows for the simultaneous decoding of all mentions, thereby reducing generation latency. Experiments reveal that PaDeLLM-NER significantly increases inference speed, being 1.76 to 10.22 times faster than the autoregressive approach for both English and Chinese. Simultaneously, it maintains the quality of predictions, as evidenced by performance on par with the state-of-the-art across various datasets. All resources are available at https://github.com/GeorgeLuImmortal/PaDeLLM_NER. | PaDeLLM-NER: Parallel Decoding in Large Language Models for Named Entity Recognition | [
"Jinghui Lu",
"Yanjie Wang",
"Ziwei Yang",
"Xuejing Liu",
"Brian Mac Namee",
"Can Huang"
] | NeurIPS.cc/2024/Conference | 2402.04838 | [
"https://github.com/GeorgeLuImmortal/PaDeLLM_NER"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
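A rough sketch of the label-parallel decoding pattern using Hugging Face transformers: one short prompt per entity label, all decoded in a single batched generate call instead of one long autoregressive pass over every label and mention. The model ("gpt2"), the prompt template, and the label set are placeholders that only illustrate the batching; they are not the paper's setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token
tok.padding_side = "left"          # required for batched decoder-only generation
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "Barack Obama visited Paris in 2015."
labels = ["person", "location", "date"]          # placeholder label set
prompts = [f"Text: {text}\nList all {lab} entities:" for lab in labels]

# All labels are decoded simultaneously in one batch, rather than in one
# long sequential pass over every label and mention.
batch = tok(prompts, return_tensors="pt", padding=True)
with torch.no_grad():
    out = model.generate(**batch, max_new_tokens=16,
                         pad_token_id=tok.eos_token_id)
new_tokens = out[:, batch["input_ids"].shape[1]:]
for lab, ids in zip(labels, new_tokens):
    print(lab, "->", tok.decode(ids, skip_special_tokens=True))
```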
null | https://openreview.net/forum?id=vjsd8Bcipv | @inproceedings{
wang2024epsilonsoftmax,
title={\${\textbackslash}epsilon\$-Softmax: Approximating One-Hot Vectors for Mitigating Label Noise},
author={Jialiang Wang and Xiong Zhou and Deming Zhai and Junjun Jiang and Xiangyang Ji and Xianming Liu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vjsd8Bcipv}
} | Noisy labels pose a common challenge for training accurate deep neural networks. To mitigate label noise, prior studies have proposed various robust loss functions to achieve noise tolerance in the presence of label noise, particularly symmetric losses. However, they usually suffer from the underfitting issue due to the overly strict symmetric condition. In this work, we propose a simple yet effective approach for relaxing the symmetric condition, namely **$\epsilon$-softmax**, which simply modifies the outputs of the softmax layer to approximate one-hot vectors with a controllable error $\epsilon$. Essentially, **$\epsilon$-softmax** not only acts as an alternative for the softmax layer, but also implicitly plays a crucial role in modifying the loss function. We prove theoretically that **$\epsilon$-softmax** can achieve noise-tolerant learning with a controllable excess risk bound for almost any loss function. Recognizing that **$\epsilon$-softmax**-enhanced losses may slightly reduce fitting ability on clean datasets, we further combine them with a symmetric loss, thereby achieving a better trade-off between robustness and effective learning. Extensive experiments demonstrate the superiority of our method in mitigating synthetic and real-world label noise. | ϵ-Softmax: Approximating One-Hot Vectors for Mitigating Label Noise | [
"Jialiang Wang",
"Xiong Zhou",
"Deming Zhai",
"Junjun Jiang",
"Xiangyang Ji",
"Xianming Liu"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
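One plausible PyTorch realization of the stated idea, modifying softmax outputs to lie within a controllable error ε of a one-hot vector via a convex combination with the argmax one-hot; the exact construction and its theory should be taken from the paper.

```python
import torch
import torch.nn.functional as F

def eps_softmax(logits, eps=0.1):
    """Convex combination of the softmax output and the one-hot vector of
    its argmax, so the result stays within a controllable error of one-hot
    (one plausible realization of the abstract's idea, not the paper's exact form)."""
    p = F.softmax(logits, dim=-1)
    onehot = F.one_hot(p.argmax(dim=-1), p.shape[-1]).to(p.dtype)
    return (1.0 - eps) * onehot + eps * p    # gradients flow through p only

def eps_ce_loss(logits, target, eps=0.1):
    # Feeding the modified outputs to cross-entropy implicitly modifies the
    # loss itself, which is the mechanism the abstract alludes to.
    return F.nll_loss(torch.log(eps_softmax(logits, eps) + 1e-12), target)

loss = eps_ce_loss(torch.randn(4, 10, requires_grad=True),
                   torch.tensor([1, 0, 3, 9]))
loss.backward()
```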
null | https://openreview.net/forum?id=vjCFnYTg67 | @inproceedings{
zhou2024bileve,
title={Bileve: Securing Text Provenance in Large Language Models Against Spoofing with Bi-level Signature},
author={Tong Zhou and Xuandong Zhao and Xiaolin Xu and Shaolei Ren},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vjCFnYTg67}
} | Text watermarks for large language models (LLMs) have been commonly used to identify the origins of machine-generated content, which is promising for assessing liability when combating deepfake or harmful content. While existing watermarking techniques typically prioritize robustness against removal attacks, unfortunately, they are vulnerable to spoofing attacks: malicious actors can subtly alter the meanings of LLM-generated responses or even forge harmful content, potentially misattributing blame to the LLM developer. To overcome this, we introduce a bi-level signature scheme, Bileve, which embeds fine-grained signature bits for integrity checks (mitigating spoofing attacks) as well as a coarse-grained signal to trace text sources when the signature is invalid (enhancing detectability) via a novel rank-based sampling strategy. Compared to conventional watermark detectors that only output binary results, Bileve can differentiate 5 scenarios during detection, reliably tracing text provenance and regulating LLMs. The experiments conducted on OPT-1.3B and LLaMA-7B demonstrate the effectiveness of Bileve in defeating spoofing attacks with enhanced detectability. | Bileve: Securing Text Provenance in Large Language Models Against Spoofing with Bi-level Signature | [
"Tong Zhou",
"Xuandong Zhao",
"Xiaolin Xu",
"Shaolei Ren"
] | NeurIPS.cc/2024/Conference | 2406.01946 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=vjAORqq71s | @inproceedings{
petersen2024newton,
title={Newton Losses: Using Curvature Information for Learning with Differentiable Algorithms},
author={Felix Petersen and Christian Borgelt and Tobias Sutter and Hilde Kuehne and Oliver Deussen and Stefano Ermon},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vjAORqq71s}
} | When training neural networks with custom objectives, such as ranking losses and shortest-path losses, a common problem is that they are, per se, non-differentiable. A popular approach is to continuously relax the objectives to provide gradients, enabling learning. However, such differentiable relaxations are often non-convex and can exhibit vanishing and exploding gradients, making them (already in isolation) hard to optimize. Here, the loss function poses the bottleneck when training a deep neural network. We present Newton Losses, a method for improving the performance of existing hard-to-optimize losses by exploiting their second-order information via their empirical Fisher and Hessian matrices. Instead of training the neural network with second-order techniques, we only utilize the loss function's second-order information to replace it with a Newton Loss, while training the network with gradient descent. This makes our method computationally efficient. We apply Newton Losses to eight differentiable algorithms for sorting and shortest-paths, achieving significant improvements for less-optimized differentiable algorithms, and consistent improvements, even for well-optimized differentiable algorithms. | Newton Losses: Using Curvature Information for Learning with Differentiable Algorithms | [
"Felix Petersen",
"Christian Borgelt",
"Tobias Sutter",
"Hilde Kuehne",
"Oliver Deussen",
"Stefano Ermon"
] | NeurIPS.cc/2024/Conference | 2410.19055 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
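A sketch of the core recipe for a single vector-valued output: take one damped Newton step on the loss with respect to the network output, then train the network by regressing toward that target with ordinary gradient descent. The damping value and the plain-Hessian variant (rather than the empirical-Fisher variant the paper also supports) are illustrative choices.

```python
import torch

def newton_loss(z, loss_fn, damping=1e-3):
    """Quadratic surrogate: one damped Newton step on loss_fn w.r.t. the
    network output z defines a fixed target; the network is then trained
    by plain gradient descent on the squared distance to that target."""
    z0 = z.detach().clone().requires_grad_(True)
    g = torch.autograd.grad(loss_fn(z0), z0)[0]
    H = torch.autograd.functional.hessian(loss_fn, z0.detach())
    H = H + damping * torch.eye(H.shape[0])
    target = z0.detach() - torch.linalg.solve(H, g)
    return 0.5 * ((z - target) ** 2).sum()

z = torch.randn(5, requires_grad=True)                 # stand-in network output
hard_loss = lambda v: (torch.logsumexp(v, 0) - v[2]) ** 2
newton_loss(z, hard_loss).backward()                   # gradient w.r.t. z only
```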
null | https://openreview.net/forum?id=vieIamY2Gi | @inproceedings{
sendera2024improved,
title={Improved off-policy training of diffusion samplers},
author={Marcin Sendera and Minsu Kim and Sarthak Mittal and Pablo Lemos and Luca Scimeca and Jarrid Rector-Brooks and Alexandre Adam and Yoshua Bengio and Nikolay Malkin},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vieIamY2Gi}
} | We study the problem of training diffusion models to sample from a distribution with a given unnormalized density or energy function. We benchmark several diffusion-structured inference methods, including simulation-based variational approaches and off-policy methods (continuous generative flow networks). Our results shed light on the relative advantages of existing algorithms while bringing into question some claims from past work. We also propose a novel exploration strategy for off-policy methods, based on local search in the target space with the use of a replay buffer, and show that it improves the quality of samples on a variety of target distributions. Our code for the sampling methods and benchmarks studied is made public at [this link](https://github.com/GFNOrg/gfn-diffusion) as a base for future work on diffusion models for amortized inference. | Improved off-policy training of diffusion samplers | [
"Marcin Sendera",
"Minsu Kim",
"Sarthak Mittal",
"Pablo Lemos",
"Luca Scimeca",
"Jarrid Rector-Brooks",
"Alexandre Adam",
"Yoshua Bengio",
"Nikolay Malkin"
] | NeurIPS.cc/2024/Conference | 2402.05098 | [
"https://github.com/gfnorg/gfn-diffusion"
] | https://huggingface.co/papers/2402.05098 | 1 | 0 | 0 | 9 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=vh9yEPLeyD | @inproceedings{
cheng2024can,
title={Can We Leave Deepfake Data Behind in Training Deepfake Detector?},
author={Jikang Cheng and Zhiyuan Yan and Ying Zhang and Yuhao Luo and Zhongyuan Wang and Chen Li},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vh9yEPLeyD}
} | The generalization ability of deepfake detectors is vital for their applications in real-world scenarios. One effective solution to enhance this ability is to train the models with manually-blended data, which we term ''blendfake'', encouraging models to learn generic forgery artifacts like blending boundaries. Interestingly, current SoTA methods utilize blendfake $\textit{without}$ incorporating any deepfake data in their training process. This is likely because previous empirical observations suggest that vanilla hybrid training (VHT), which combines deepfake and blendfake data, results in inferior performance to methods using only blendfake data (so-called "1+1<2"). Therefore, a critical question arises: Can we leave deepfake behind and rely solely on blendfake data to train an effective deepfake detector? Intuitively, as deepfakes also contain additional informative forgery clues ($\textit{e.g.,}$ deep generative artifacts), excluding all deepfake data in training deepfake detectors seems counter-intuitive. In this paper, we rethink the role of blendfake in detecting deepfakes and formulate the process from "real to blendfake to deepfake" as a $\textit{progressive transition}$. Specifically, blendfake and deepfake can be explicitly delineated as the oriented pivot anchors between "real-to-fake" transitions. The accumulation of forgery information should be oriented and progressively increasing during this transition process. To this end, we propose an $\underline{O}$riented $\underline{P}$rogressive $\underline{R}$egularizor (OPR) to establish the constraints that compel the distribution of anchors to be discretely arranged. Furthermore, we introduce feature bridging to facilitate the smooth transition between adjacent anchors. Extensive experiments confirm that our design allows leveraging forgery information from both blendfake and deepfake effectively and comprehensively. Code is available at https://github.com/beautyremain/ProDet. | Can We Leave Deepfake Data Behind in Training Deepfake Detector? | [
"Jikang Cheng",
"Zhiyuan Yan",
"Ying Zhang",
"Yuhao Luo",
"Zhongyuan Wang",
"Chen Li"
] | NeurIPS.cc/2024/Conference | 2408.17052 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=veMnGKXvTx | @inproceedings{
zhang2024homology,
title={Homology Consistency Constrained Efficient Tuning for Vision-Language Models},
author={Huatian Zhang and Lei Zhang and Yongdong Zhang and Zhendong Mao},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=veMnGKXvTx}
} | Efficient transfer learning has shown remarkable performance in tuning large-scale vision-language models (VLMs) toward downstream tasks with limited data resources. The key challenge of efficient transfer lies in adjusting image-text alignment to be task-specific while preserving pre-trained general knowledge. However, existing methods adjust image-text alignment merely on a set of observed samples, e.g., the dataset and an external knowledge base, which cannot guarantee that the correspondence of general concepts between the image and text latent manifolds is kept undisrupted, thereby leading to weak generalization of the adjusted alignment. In this work, we propose a Homology Consistency (HC) constraint for efficient transfer on VLMs, which explicitly constrains the correspondence of image and text latent manifolds through structural equivalence based on persistent homology in downstream tuning. Specifically, we build a simplicial complex on top of the data to mimic the topology of latent manifolds, then track the persistence of the homology classes of topological features across multiple scales, and guide the directions of persistence tracks in image and text manifolds to coincide with each other, additionally applying a deviating perturbation. For practical application, we tailor the implementation of our proposed HC constraint for two main paradigms of adapter tuning. Extensive experiments on few-shot learning over 11 datasets and domain generalization demonstrate the effectiveness and robustness of our method. | Homology Consistency Constrained Efficient Tuning for Vision-Language Models | [
"Huatian Zhang",
"Lei Zhang",
"Yongdong Zhang",
"Zhendong Mao"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=vcGEV6m5m2 | @inproceedings{
wan2024templatefree,
title={Template-free Articulated Gaussian Splatting for Real-time Reposable Dynamic View Synthesis},
author={Diwen Wan and Yuxiang Wang and Ruijie Lu and Gang Zeng},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vcGEV6m5m2}
} | While novel view synthesis for dynamic scenes has made significant progress, capturing skeleton models of objects and re-posing them remains a challenging task. To tackle this problem, in this paper, we propose a novel approach to automatically discover the associated skeleton model for dynamic objects from videos without the need for object-specific templates. Our approach utilizes 3D Gaussian Splatting and superpoints to reconstruct dynamic objects. Treating superpoints as rigid parts, we can discover the underlying skeleton model through intuitive cues and optimize it using the kinematic model. Besides, an adaptive control strategy is applied to avoid the emergence of redundant superpoints. Extensive experiments demonstrate the effectiveness and efficiency of our method in obtaining re-posable 3D objects. Not only can our approach achieve excellent visual fidelity, but it also allows for the real-time rendering of high-resolution images. | Template-free Articulated Gaussian Splatting for Real-time Reposable Dynamic View Synthesis | [
"Diwen Wan",
"Yuxiang Wang",
"Ruijie Lu",
"Gang Zeng"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=vYUx8j5KK2 | @inproceedings{
yu2024curriculum,
title={Curriculum Fine-tuning of Vision Foundation Model for Medical Image Classification Under Label Noise},
author={Yeonguk Yu and Minhwan Ko and Sungho Shin and Kangmin Kim and Kyoobin Lee},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vYUx8j5KK2}
} | Deep neural networks have demonstrated remarkable performance in various vision tasks, but their success heavily depends on the quality of the training data. Noisy labels are a critical issue in medical datasets and can significantly degrade model performance. Previous clean sample selection methods have not utilized the well-pretrained features of vision foundation models (VFMs) and have assumed that training begins from scratch. In this paper, we propose CUFIT, a curriculum fine-tuning paradigm of VFMs for medical image classification under label noise. Our method is motivated by the fact that linear probing of VFMs is relatively unaffected by noisy samples, as it does not update the feature extractor of the VFM, thus robustly classifying the training samples. Subsequently, curriculum fine-tuning of two adapters is conducted, starting with clean sample selection from the linear probing phase. Our experimental results demonstrate that CUFIT outperforms previous methods across various medical image benchmarks. Specifically, our method surpasses previous baselines by 5.0\%, 2.1\%, 4.6\%, and 5.8\% at a 40\% noise rate on the HAM10000, APTOS-2019, BloodMnist, and OrgancMnist datasets, respectively. Furthermore, we provide extensive analyses to demonstrate the impact of our method on noisy label detection. For instance, our method shows higher label precision and recall compared to previous approaches. Our work highlights the potential of leveraging VFMs in medical image classification under challenging conditions of noisy labels. | Curriculum Fine-tuning of Vision Foundation Model for Medical Image Classification Under Label Noise | [
"Yeonguk Yu",
"Minhwan Ko",
"Sungho Shin",
"Kangmin Kim",
"Kyoobin Lee"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
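A minimal sketch of the first curriculum phase with scikit-learn, assuming precomputed frozen VFM features: fit a linear probe (which does not update the backbone and is therefore relatively noise-robust) and keep the samples whose given label the probe agrees with. Agreement-based selection is a simplifying assumption; thresholds and the two-adapter stages follow the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_clean(features, noisy_labels):
    """Fit a linear probe on frozen VFM features and keep the indices whose
    noisy label the probe agrees with; the retained set then seeds the
    adapter fine-tuning stages of the curriculum."""
    probe = LogisticRegression(max_iter=1000).fit(features, noisy_labels)
    return np.where(probe.predict(features) == noisy_labels)[0]

rng = np.random.default_rng(0)
clean_idx = select_clean(rng.normal(size=(100, 16)), rng.integers(0, 3, 100))
```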
null | https://openreview.net/forum?id=vWSll6M9pj | @inproceedings{
haliassos2024unified,
title={Unified Speech Recognition: A Single Model for Auditory, Visual, and Audiovisual Inputs},
author={Alexandros Haliassos and Rodrigo Mira and Honglie Chen and Zoe Landgraf and Stavros Petridis and Maja Pantic},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vWSll6M9pj}
} | Research in auditory, visual, and audiovisual speech recognition (ASR, VSR, and AVSR, respectively) has traditionally been conducted independently. Even recent self-supervised studies addressing two or all three tasks simultaneously tend to yield separate models, leading to disjoint inference pipelines with increased memory requirements and redundancies. This paper proposes unified training strategies for these systems. We demonstrate that training a single model for all three tasks enhances VSR and AVSR performance, overcoming typical optimisation challenges when training from scratch. Moreover, we introduce a greedy pseudo-labelling approach to more effectively leverage unlabelled samples, addressing shortcomings in related self-supervised methods. Finally, we develop a self-supervised pre-training method within our framework, proving its effectiveness alongside our semi-supervised approach. Despite using a single model for all tasks, our unified approach achieves state-of-the-art performance on LRS3 for ASR, VSR, and AVSR compared to recent methods. Code will be made publicly available. | Unified Speech Recognition: A Single Model for Auditory, Visual, and Audiovisual Inputs | [
"Alexandros Haliassos",
"Rodrigo Mira",
"Honglie Chen",
"Zoe Landgraf",
"Stavros Petridis",
"Maja Pantic"
] | NeurIPS.cc/2024/Conference | 2411.02256 | [
"https://github.com/ahaliassos/usr"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=vUrOuc6NR3 | @inproceedings{
cui2024dynamo,
title={DynaMo: In-Domain Dynamics Pretraining for Visuo-Motor Control},
author={Zichen Jeff Cui and Hengkai Pan and Aadhithya Iyer and Siddhant Haldar and Lerrel Pinto},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vUrOuc6NR3}
} | Imitation learning has proven to be a powerful tool for training complex visuo-motor policies. However, current methods often require hundreds to thousands of expert demonstrations to handle high-dimensional visual observations. A key reason for this poor data efficiency is that visual representations are predominantly either pretrained on out-of-domain data or trained directly through a behavior cloning objective. In this work, we present DynaMo, a new in-domain, self-supervised method for learning visual representations. Given a set of expert demonstrations, we jointly learn a latent inverse dynamics model and a forward dynamics model over a sequence of image embeddings, predicting the next frame in latent space, without augmentations, contrastive sampling, or access to ground truth actions. Importantly, DynaMo does not require any out-of-domain data such as Internet datasets or cross-embodied datasets. On a suite of six simulated and real environments, we show that representations learned with DynaMo significantly improve downstream imitation learning performance over prior self-supervised learning objectives, and pretrained representations. Gains from using DynaMo hold across policy classes such as Behavior Transformer, Diffusion Policy, MLP, and nearest neighbors. Finally, we ablate over key components of DynaMo and measure its impact on downstream policy performance. Robot videos are best viewed at https://dynamo-ssl.github.io. | DynaMo: In-Domain Dynamics Pretraining for Visuo-Motor Control | [
"Zichen Jeff Cui",
"Hengkai Pan",
"Aadhithya Iyer",
"Siddhant Haldar",
"Lerrel Pinto"
] | NeurIPS.cc/2024/Conference | 2409.12192 | [
""
] | https://huggingface.co/papers/2409.12192 | 0 | 4 | 3 | 5 | [] | [] | [] | [] | [] | [] | 1 | poster |
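A toy PyTorch sketch of the joint objective: a latent inverse dynamics model infers a latent "action" from consecutive embeddings, and a forward model must predict the next embedding from it. The tiny linear modules and the stop-gradient on the target (one common anti-collapse choice) are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

enc = nn.Linear(3 * 64 * 64, 128)   # toy image encoder
inv = nn.Linear(2 * 128, 16)        # latent inverse dynamics: (z_t, z_{t+1}) -> a
fwd = nn.Linear(128 + 16, 128)      # forward dynamics: (z_t, a) -> z_{t+1}

def dynamo_loss(obs):               # obs: (B, T, 3, 64, 64) demonstration frames
    B, T = obs.shape[:2]
    z = enc(obs.reshape(B * T, -1)).reshape(B, T, -1)
    zt, znext = z[:, :-1], z[:, 1:]
    a = inv(torch.cat([zt, znext], dim=-1))       # inferred latent "action"
    pred = fwd(torch.cat([zt, a], dim=-1))
    # Predict the next embedding; the stop-gradient on the target is one
    # common way to avoid the trivial constant-embedding solution.
    return ((pred - znext.detach()) ** 2).mean()

loss = dynamo_loss(torch.randn(4, 8, 3, 64, 64))
loss.backward()
```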
null | https://openreview.net/forum?id=vU512K8vrR | @inproceedings{
ke2024unveiling,
title={Unveiling Lo{RA} Intrinsic Ranks via Salience Analysis},
author={Wenjun Ke and Jiahao Wang and Peng Wang and Jiajun Liu and Dong Nie and Guozheng Li and Yining Li},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vU512K8vrR}
} | The immense parameter scale of large language models underscores the necessity for parameter-efficient fine-tuning methods. Methods based on Low-Rank Adaptation (LoRA) assume the low-rank characteristics of the incremental matrix and optimize the matrix obtained from low-rank decomposition. Although effective, these methods are constrained by a fixed and unalterable intrinsic rank, neglecting the variable importance of matrices. Consequently, methods for adaptive rank allocation are proposed, among which AdaLoRA demonstrates excellent fine-tuning performance. AdaLoRA conducts adaptation based on singular value decomposition (SVD), dynamically allocating intrinsic ranks according to importance. However, it still struggles to achieve a balance between fine-tuning effectiveness and efficiency, leading to limited rank allocation space. Additionally, the importance measurement focuses only on parameters with minimal impact on the loss, neglecting the dominant role of singular values in SVD-based matrices and the fluctuations during training. To address these issues, we propose SalientLoRA, which adaptively optimizes intrinsic ranks of LoRA via salience measurement. Firstly, during rank allocation, the salience measurement analyses the variation of singular value magnitudes across multiple time steps and establishes their inter-dependency relationships to assess the matrix importance. This measurement mitigates instability and randomness that may arise during importance assessment. Secondly, to achieve a balance between fine-tuning performance and efficiency, we propose an adaptive adjustment of the time-series window, which adaptively controls the size of the time series used for significance measurement and rank reduction during training, allowing for rapid rank allocation while maintaining training stability. This mechanism enables matrices to be assigned a higher initial rank, thus expanding the allocation space for ranks. To evaluate the generality of our method across various tasks, we conduct experiments on natural language understanding (NLU), natural language generation (NLG), and large model instruction tuning tasks. Experimental results demonstrate the superiority of SalientLoRA, which outperforms state-of-the-art methods by 0.96\%-3.56\% on multiple datasets. Furthermore, as the rank allocation space expands, our method ensures fine-tuning efficiency, achieving a speed improvement of 94.5\% compared to AdaLoRA. The code is publicly available at https://github.com/Heyest/SalientLoRA. | Unveiling LoRA Intrinsic Ranks via Salience Analysis | [
"Wenjun Ke",
"Jiahao Wang",
"Peng Wang",
"Jiajun Liu",
"Dong Nie",
"Guozheng Li",
"Yining Li"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=vU1SiBb57j | @inproceedings{
li2024learning,
title={Learning Multimodal Behaviors from Scratch with Diffusion Policy Gradient},
author={Zechu Li and Rickmer Krohn and Tao Chen and Anurag Ajay and Pulkit Agrawal and Georgia Chalvatzaki},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vU1SiBb57j}
} | Deep reinforcement learning (RL) algorithms typically parameterize the policy as a deep network that outputs either a deterministic action or a stochastic one modeled as a Gaussian distribution, hence restricting learning to a single behavioral mode. Meanwhile, diffusion models emerged as a powerful framework for multimodal learning. However, the use of diffusion policies in online RL is hindered by the intractability of policy likelihood approximation, as well as the greedy objective of RL methods that can easily skew the policy to a single mode. This paper presents Deep Diffusion Policy Gradient (DDiffPG), a novel actor-critic algorithm that learns from scratch multimodal policies parameterized as diffusion models while discovering and maintaining versatile behaviors. DDiffPG explores and discovers multiple modes through off-the-shelf unsupervised clustering combined with novelty-based intrinsic motivation. DDiffPG forms a multimodal training batch and utilizes mode-specific Q-learning to mitigate the inherent greediness of the RL objective, ensuring the improvement of the diffusion policy across all modes. Our approach further allows the policy to be conditioned on mode-specific embeddings to explicitly control the learned modes. Empirical studies validate DDiffPG's capability to master multimodal behaviors in complex, high-dimensional continuous control tasks with sparse rewards, also showcasing proof-of-concept dynamic online replanning when navigating mazes with unseen obstacles. Our project page is available at https://supersglzc.github.io/projects/ddiffpg/. | Learning Multimodal Behaviors from Scratch with Diffusion Policy Gradient | [
"Zechu Li",
"Rickmer Krohn",
"Tao Chen",
"Anurag Ajay",
"Pulkit Agrawal",
"Georgia Chalvatzaki"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=vS5NC7jtCI | @inproceedings{
zhang2024adaneg,
title={AdaNeg: Adaptive Negative Proxy Guided {OOD} Detection with Vision-Language Models},
author={Yabin Zhang and Lei Zhang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vS5NC7jtCI}
} | Recent research has shown that pre-trained vision-language models are effective at identifying out-of-distribution (OOD) samples by using negative labels as guidance. However, employing consistent negative labels across different OOD datasets often results in semantic misalignments, as these text labels may not accurately reflect the actual space of OOD images. To overcome this issue, we introduce \textit{adaptive negative proxies}, which are dynamically generated during testing by exploring actual OOD images, to align more closely with the underlying OOD label space and enhance the efficacy of negative proxy guidance. Specifically, our approach utilizes a feature memory bank to selectively cache discriminative features from test images, representing the targeted OOD distribution. This facilitates the creation of proxies that can better align with specific OOD datasets. While task-adaptive proxies average features to reflect the unique characteristics of each dataset, the sample-adaptive proxies weight features based on their similarity to individual test samples, exploring detailed sample-level nuances. The final score for identifying OOD samples integrates static negative labels with our proposed adaptive proxies, effectively combining textual and visual knowledge for enhanced performance. Our method is training-free and annotation-free, and it maintains fast testing speed. Extensive experiments across various benchmarks demonstrate the effectiveness of our approach, abbreviated as AdaNeg. Notably, on the large-scale ImageNet benchmark, our AdaNeg significantly outperforms existing methods, with a 2.45\% increase in AUROC and a 6.48\% reduction in FPR95. Codes are available at \url{https://github.com/YBZh/OpenOOD-VLM}. | AdaNeg: Adaptive Negative Proxy Guided OOD Detection with Vision-Language Models | [
"Yabin Zhang",
"Lei Zhang"
] | NeurIPS.cc/2024/Conference | 2410.20149 | [
"https://github.com/ybzh/openood-vlm"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
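A numpy sketch of the scoring idea with L2-normalized features: a task-adaptive negative proxy averages the test-time feature bank, a sample-adaptive proxy re-weights the bank by similarity to the current image, and both are combined with static positive prototypes. The 0.07 temperature and the equal mixing weights are assumptions, not the paper's values.

```python
import numpy as np

def adaneg_scores(img_feat, pos_protos, neg_bank):
    """Combine static positive prototypes with adaptive negative proxies
    built from a test-time feature bank (all features L2-normalized)."""
    task_proxy = neg_bank.mean(axis=0)
    task_proxy /= np.linalg.norm(task_proxy) + 1e-12
    # Sample-adaptive proxy: weight cached features by similarity to x.
    w = np.exp(neg_bank @ img_feat / 0.07)        # temperature is an assumption
    sample_proxy = (w[:, None] * neg_bank).sum(0) / w.sum()
    sample_proxy /= np.linalg.norm(sample_proxy) + 1e-12
    pos = (pos_protos @ img_feat).max()
    neg = 0.5 * (task_proxy @ img_feat) + 0.5 * (sample_proxy @ img_feat)
    return pos - neg                              # lower -> more likely OOD

rng = np.random.default_rng(0)
norm = lambda A: A / np.linalg.norm(A, axis=-1, keepdims=True)
s = adaneg_scores(norm(rng.normal(size=16)),
                  norm(rng.normal(size=(10, 16))),
                  norm(rng.normal(size=(64, 16))))
```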
null | https://openreview.net/forum?id=vP9qAzr2Gw | @inproceedings{
karmim2024supralaplacian,
title={Supra-Laplacian Encoding for Transformer on Dynamic Graphs},
author={Yannis Karmim and Marc Lafon and Raphael Fournier-S'niehotta and Nicolas THOME},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vP9qAzr2Gw}
} | Fully connected Graph Transformers (GT) have rapidly become prominent in the static graph community as an alternative to Message-Passing models, which suffer from a lack of expressivity, oversquashing, and under-reaching.
However, in a dynamic context, by interconnecting all nodes at multiple snapshots with self-attention, GT loses both structural and temporal information. In this work, we introduce Supra-LAplacian encoding for spatio-temporal TransformErs (SLATE), a new spatio-temporal encoding that leverages the GT architecture while preserving spatio-temporal information.
Specifically, we transform Discrete Time Dynamic Graphs into multi-layer graphs and take advantage of the spectral properties of their associated supra-Laplacian matrix.
Our second contribution explicitly models nodes' pairwise relationships with a cross-attention mechanism, providing an accurate edge representation for dynamic link prediction.
SLATE outperforms numerous state-of-the-art methods based on Message-Passing Graph Neural Networks combined with recurrent models (e.g., LSTM), and Dynamic Graph Transformers,
on 9 datasets. Code is open-source and available at this link https://github.com/ykrmm/SLATE. | Supra-Laplacian Encoding for Transformer on Dynamic Graphs | [
"Yannis Karmim",
"Marc Lafon",
"Raphael Fournier-S'niehotta",
"Nicolas THOME"
] | NeurIPS.cc/2024/Conference | 2409.17986 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
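A small numpy sketch of the supra-Laplacian construction behind the encoding: snapshot adjacencies on the block diagonal plus identity inter-layer coupling between consecutive copies of each node, with low-frequency eigenvectors serving as a joint spatio-temporal positional encoding. The coupling weight omega and the number of kept eigenvectors are assumptions.

```python
import numpy as np

def supra_laplacian(adjs, omega=1.0):
    """Supra-Laplacian of a discrete-time dynamic graph: snapshot adjacency
    matrices on the block diagonal, plus identity inter-layer coupling of
    weight omega linking each node to its copy in the next snapshot."""
    T, n = len(adjs), adjs[0].shape[0]
    A = np.zeros((T * n, T * n))
    for t, At in enumerate(adjs):
        A[t*n:(t+1)*n, t*n:(t+1)*n] = At
    for t in range(T - 1):                       # inter-layer edges
        A[t*n:(t+1)*n, (t+1)*n:(t+2)*n] = omega * np.eye(n)
        A[(t+1)*n:(t+2)*n, t*n:(t+1)*n] = omega * np.eye(n)
    return np.diag(A.sum(1)) - A

# Low-frequency eigenvectors give a joint spatio-temporal positional
# encoding for the transformer's node-time tokens.
rng = np.random.default_rng(0)
adjs = []
for _ in range(3):
    a = (rng.random((5, 5)) > 0.6).astype(float)
    a = np.maximum(a, a.T)
    np.fill_diagonal(a, 0)
    adjs.append(a)
evals, evecs = np.linalg.eigh(supra_laplacian(adjs))
pos_enc = evecs[:, 1:9]                          # skip the constant eigenvector
```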
null | https://openreview.net/forum?id=vMMzjCr5Zj | @inproceedings{
kamarthi2024large,
title={Large Pre-trained time series models for cross-domain Time series analysis tasks},
author={Harshavardhan Kamarthi and B. Aditya Prakash},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vMMzjCr5Zj}
} | Large pre-trained models have been vital in recent advancements in domains like language and vision, making model training for individual downstream tasks more efficient and providing superior performance. However, tackling time-series analysis tasks usually involves designing and training a separate model from scratch, leveraging training data and domain expertise specific to the task. We tackle a significant challenge for pre-training a foundational time-series model from multi-domain time-series datasets: extracting semantically useful tokenized inputs to the model across heterogeneous time-series from different domains. We propose Large Pre-trained Time-series Models (LPTM), which introduce a novel method of adaptive segmentation that automatically identifies the optimal dataset-specific segmentation strategy during pre-training. This enables LPTM to perform similarly to or better than domain-specific state-of-the-art models when fine-tuned to different downstream time-series analysis tasks and under zero-shot settings. LPTM achieves superior forecasting and time-series classification results taking up to 40% less data and 50% less training time compared to state-of-the-art baselines. | Large Pre-trained time series models for cross-domain Time series analysis tasks | [
"Harshavardhan Kamarthi",
"B. Aditya Prakash"
] | NeurIPS.cc/2024/Conference | 2311.11413 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=vJSNsSFO95 | @inproceedings{
li2024flaws,
title={Flaws can be Applause: Unleashing Potential of Segmenting Ambiguous Objects in {SAM}},
author={Chenxin Li and Yuzhihuang and Wuyang Li and Hengyu Liu and Xinyu Liu and Qing Xu and Zhen Chen and Yue Huang and Yixuan Yuan},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vJSNsSFO95}
} | As vision foundation models like the Segment Anything Model (SAM) demonstrate potent universality, they also present the challenge of giving ambiguous and uncertain predictions. Significant variations in model output and granularity can occur with merely subtle changes in the prompt, contradicting the consensus requirement for the robustness of a model. While some established works have been dedicated to stabilizing and fortifying the prediction of SAM, this paper takes a unique path to explore how this flaw can be inverted into an advantage when modeling inherently ambiguous data distributions. We introduce an optimization framework based on a conditional variational autoencoder, which jointly models the prompt and the granularity of the object with a latent probability distribution. This approach enables the model to adaptively perceive and represent the real ambiguous label distribution, taming SAM to produce a series of diverse, convincing, and reasonable segmentation outputs controllably. Extensive experiments on several practical deployment scenarios involving ambiguity demonstrate the exceptional performance of our framework. Project page: \url{https://a-sa-m.github.io/}. | Flaws can be Applause: Unleashing Potential of Segmenting Ambiguous Objects in SAM | [
"Chenxin Li",
"Yuzhihuang",
"Wuyang Li",
"Hengyu Liu",
"Xinyu Liu",
"Qing Xu",
"Zhen Chen",
"Yue Huang",
"Yixuan Yuan"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=vJMMdFfL0A | @inproceedings{
liu2024the,
title={The Benefits of Balance: From Information Projections to Variance Reduction},
author={Lang Liu and Ronak Mehta and Soumik Pal and Zaid Harchaoui},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vJMMdFfL0A}
} | Data balancing across multiple modalities and sources appears in various forms in foundation models in machine learning and AI, e.g. in CLIP and DINO. We show that data balancing across modalities and sources actually offers an unsuspected benefit: variance reduction. We present a non-asymptotic statistical bound that quantifies this variance reduction effect and relates it to the eigenvalue decay of Markov operators. Furthermore, we describe how various forms of data balancing in contrastive multimodal learning and self-supervised clustering can be better understood, and even improved upon, owing to our variance reduction viewpoint. | The Benefits of Balance: From Information Projections to Variance Reduction | [
"Lang Liu",
"Ronak Mehta",
"Soumik Pal",
"Zaid Harchaoui"
] | NeurIPS.cc/2024/Conference | 2408.15065 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
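For the balancing paper above, one concrete instance of "data balancing" amenable to the information-projection view is Sinkhorn-style iterative proportional fitting, shown here as a hedged NumPy sketch (a standard algorithm, not the paper's estimator):

```python
import numpy as np

def balance(P: np.ndarray, iters: int = 50) -> np.ndarray:
    """Alternately normalize rows and columns of a nonnegative matrix so
    both marginals approach uniform -- alternating projections onto the
    two marginal constraints."""
    Q = P.copy()
    for _ in range(iters):
        Q = Q / Q.sum(axis=1, keepdims=True)  # project onto row constraint
        Q = Q / Q.sum(axis=0, keepdims=True)  # project onto column constraint
    return Q / Q.sum()                        # renormalize to a joint distribution

rng = np.random.default_rng(0)
Q = balance(rng.random((4, 4)))
print(Q.sum(axis=0), Q.sum(axis=1))  # both marginals close to 0.25 each
```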
null | https://openreview.net/forum?id=vJLTcCBZVT | @inproceedings{
jain2024improving,
title={Improving Subgroup Robustness via Data Selection},
author={Saachi Jain and Kimia Hamidieh and Kristian Georgiev and Andrew Ilyas and Marzyeh Ghassemi and Aleksander Madry},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vJLTcCBZVT}
} | Machine learning models can often fail on subgroups that are underrepresented
during training. While dataset balancing can improve performance on
underperforming groups, it requires access to training group annotations and can
end up removing large portions of the dataset. In this paper, we introduce
Data Debiasing with Datamodels (D3M), a debiasing approach
which isolates and removes specific training examples that drive the model's
failures on minority groups. Our approach enables us to efficiently train
debiased classifiers while removing only a small number of examples, and does
not require training group annotations or additional hyperparameter tuning. | Improving Subgroup Robustness via Data Selection | [
"Saachi Jain",
"Kimia Hamidieh",
"Kristian Georgiev",
"Andrew Ilyas",
"Marzyeh Ghassemi",
"Aleksander Madry"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
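Once D3M's attribution scores exist, the debiasing step the abstract describes is a simple selection: drop the k training examples most responsible for minority-group failures and retrain. A sketch of that step, assuming the per-example scores (computed with datamodels in the paper) are already given:

```python
import numpy as np

def d3m_style_selection(attributions: np.ndarray, k: int) -> np.ndarray:
    """Return indices to KEEP after dropping the k examples whose
    attribution toward minority-group error is largest (most harmful)."""
    drop = np.argsort(attributions)[-k:]                 # k most harmful
    return np.setdiff1d(np.arange(len(attributions)), drop)

scores = np.array([0.1, -0.2, 0.9, 0.05, 0.7])           # toy attributions
print(d3m_style_selection(scores, k=2))                   # -> [0 1 3]
```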
null | https://openreview.net/forum?id=vIP8IWmZlN | @inproceedings{
lipinski2024speaking,
title={Speaking Your Language: Spatial Relationships in Interpretable Emergent Communication},
author={Olaf Lipinski and Adam Sobey and Federico Cerutti and Timothy J. Norman},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vIP8IWmZlN}
} | Effective communication requires the ability to refer to specific parts of an observation in relation to others. While emergent communication literature shows success in developing various language properties, no research has shown the emergence of such positional references. This paper demonstrates how agents can communicate about spatial relationships within their observations. The results indicate that agents can develop a language capable of expressing the relationships between parts of their observation, achieving over 90% accuracy when trained in a referential game which requires such communication. Using a collocation measure, we demonstrate how the agents create such references. This analysis suggests that agents use a mixture of non-compositional and compositional messages to convey spatial relationships. We also show that the emergent language is interpretable by humans. The translation accuracy is tested by communicating with the receiver agent, where the receiver achieves over 78% accuracy using parts of this lexicon, confirming that the interpretation of the emergent language was successful. | Speaking Your Language: Spatial Relationships in Interpretable Emergent Communication | [
"Olaf Lipinski",
"Adam Sobey",
"Federico Cerutti",
"Timothy J. Norman"
] | NeurIPS.cc/2024/Conference | 2406.07277 | [
"https://github.com/olipinski/tpg"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
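The emergent-communication row above analyzes messages "using a collocation measure" without naming it in this abstract; as an illustrative stand-in, pointwise mutual information between message symbols is one standard such measure:

```python
import math

def pmi(messages, a, b):
    """Pointwise mutual information of symbols a and b co-occurring in a
    message; positive PMI flags a collocation (symbols appearing together
    more often than chance)."""
    n = len(messages)
    p_a = sum(a in m for m in messages) / n
    p_b = sum(b in m for m in messages) / n
    p_ab = sum(a in m and b in m for m in messages) / n
    if p_ab == 0:
        return float("-inf")
    return math.log2(p_ab / (p_a * p_b))

msgs = [(1, 7, 3), (1, 7, 2), (4, 5, 1), (1, 7, 7)]
print(pmi(msgs, 1, 7))
```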
null | https://openreview.net/forum?id=vIOKLMl6wu | @inproceedings{
zhao2024lova,
title={{LOVA}3: Learning to Visual Question Answering, Asking and Assessment},
author={Hengyuan Zhao and Pan Zhou and Difei Gao and Zechen Bai and Mike Zheng Shou},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vIOKLMl6wu}
} | Question answering, asking, and assessment are three innate human traits crucial for understanding the world and acquiring knowledge. By enhancing these capabilities, humans can more effectively utilize data, leading to better comprehension and learning outcomes. However, current Multimodal Large Language Models (MLLMs) primarily focus on question answering, often neglecting the full potential of questioning and assessment skills. In this study, we introduce LOVA3, an innovative framework named ``Learning tO Visual Question Answering, Asking and Assessment,'' designed to equip MLLMs with these additional capabilities. Our approach involves the creation of two supplementary training tasks GenQA and EvalQA, aiming at fostering the skills of asking and assessing questions in the context of images. To develop the questioning ability, we compile a comprehensive set of multimodal foundational tasks. For assessment, we introduce a new benchmark called EvalQABench, comprising 64,000 training samples (split evenly between positive and negative samples) and 5,000 testing samples. We posit that enhancing MLLMs with the capabilities to answer, ask, and assess questions
will strengthen their multimodal comprehension, ultimately improving overall performance. To validate this hypothesis, we train MLLMs using the LOVA3 framework and evaluate them on a range of multimodal datasets and benchmarks. Our results demonstrate consistent performance gains, underscoring the critical role of these additional tasks in fostering comprehensive intelligence in MLLMs. | LOVA3: Learning to Visual Question Answering, Asking and Assessment | [
"Hengyuan Zhao",
"Pan Zhou",
"Difei Gao",
"Zechen Bai",
"Mike Zheng Shou"
] | NeurIPS.cc/2024/Conference | 2405.14974 | [
"https://github.com/showlab/lova3"
] | https://huggingface.co/papers/2405.14974 | 1 | 1 | 0 | 4 | [
"hhenryz/LOVA3-llava-v1.5-7b"
] | [] | [] | [
"hhenryz/LOVA3-llava-v1.5-7b"
] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=vI1WqFn15v | @inproceedings{
hu2024gradient,
title={Gradient Cuff: Detecting Jailbreak Attacks on Large Language Models by Exploring Refusal Loss Landscapes},
author={Xiaomeng Hu and Pin-Yu Chen and Tsung-Yi Ho},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vI1WqFn15v}
} | Large Language Models (LLMs) are becoming a prominent generative AI tool, where the user enters a query and the LLM generates an answer. To reduce harm and misuse, efforts have been made to align these LLMs to human values using advanced training techniques such as Reinforcement Learning from Human Feedback (RLHF). However, recent studies have highlighted the vulnerability of LLMs to adversarial jailbreak attempts aiming at subverting the embedded safety guardrails. To address this challenge, this paper defines and investigates the **Refusal Loss** of LLMs and then proposes a method called **Gradient Cuff** to detect jailbreak attempts. Gradient Cuff exploits the unique properties observed in the refusal loss landscape, including functional values and its smoothness, to design an effective two-step detection strategy. Experimental results on two aligned LLMs (LLaMA-2-7B-Chat and Vicuna-7B-V1.5) and six types of jailbreak attacks (GCG, AutoDAN, PAIR, TAP, Base64, and LRL) show that Gradient Cuff can significantly improve the LLM's rejection capability for malicious jailbreak queries, while maintaining the model's performance for benign user queries by adjusting the detection threshold. | Gradient Cuff: Detecting Jailbreak Attacks on Large Language Models by Exploring Refusal Loss Landscapes | [
"Xiaomeng Hu",
"Pin-Yu Chen",
"Tsung-Yi Ho"
] | NeurIPS.cc/2024/Conference | 2403.00867 | [
""
] | https://huggingface.co/papers/2403.00867 | 0 | 0 | 0 | 3 | [] | [] | [
"TrustSafeAI/GradientCuff-Jailbreak-Defense"
] | [] | [] | [
"TrustSafeAI/GradientCuff-Jailbreak-Defense"
] | 1 | poster |
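Gradient Cuff's two-step detection, as summarized above, first checks the refusal-loss value and then probes the smoothness of the loss landscape around the query. A heavily hedged sketch with a black-box scalar loss: the thresholds, the zeroth-order gradient estimator, and the toy loss are stand-ins, not the released implementation:

```python
import numpy as np

def gradient_cuff(f, x, sigma=0.02, n_samples=8,
                  loss_thresh=0.5, grad_thresh=1.0, seed=0):
    """f: scalar refusal-loss function over an embedding (black box).
    Step 1: flag the query if the refusal loss is already low (the model
            is inclined to refuse it).
    Step 2: otherwise estimate ||grad f(x)|| from finite differences
            along random directions and flag sharp landscapes."""
    if f(x) < loss_thresh:
        return True                                      # step 1
    rng = np.random.default_rng(seed)
    dirs = rng.standard_normal((n_samples, x.size))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    grads = [(f(x + sigma * u) - f(x)) / sigma * u for u in dirs]
    grad_est = np.mean(grads, axis=0) * x.size           # zeroth-order estimate
    return bool(np.linalg.norm(grad_est) > grad_thresh)  # step 2

f = lambda v: float(np.tanh(np.linalg.norm(v)))          # toy stand-in loss
print(gradient_cuff(f, np.ones(16)))
```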
null | https://openreview.net/forum?id=vH7GcaDhAo | @inproceedings{
zeng2024rsa,
title={{RSA}: Resolving Scale Ambiguities in Monocular Depth Estimators through Language Descriptions},
author={Ziyao Zeng and Yangchao Wu and Hyoungseob Park and Daniel Wang and Fengyu Yang and Stefano Soatto and Dong Lao and Byung-Woo Hong and Alex Wong},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vH7GcaDhAo}
} | We propose a method for metric-scale monocular depth estimation. Inferring depth from a single image is an ill-posed problem due to the loss of scale from perspective projection during the image formation process. Any scale chosen is a bias, typically stemming from training on a dataset; hence, existing works have instead opted to use relative (normalized, inverse) depth. Our goal is to recover metric-scaled depth maps through a linear transformation. The crux of our method lies in the observation that certain objects (e.g., cars, trees, street signs) are typically found or associated with certain types of scenes (e.g., outdoor). We explore whether language descriptions can be used to transform relative depth predictions to those in metric scale. Our method, RSA , takes as input a text caption describing objects present in an image and outputs the parameters of a linear transformation which can be applied globally to a relative depth map to yield metric-scaled depth predictions. We demonstrate our method on recent general-purpose monocular depth models on indoors (NYUv2, VOID) and outdoors (KITTI). When trained on multiple datasets, RSA can serve as a general alignment module in zero-shot settings. Our method improves over common practices in aligning relative to metric depth and results in predictions that are comparable to an upper bound of fitting relative depth to ground truth via a linear transformation. Code is available at: https://github.com/Adonis-galaxy/RSA. | RSA: Resolving Scale Ambiguities in Monocular Depth Estimators through Language Descriptions | [
"Ziyao Zeng",
"Yangchao Wu",
"Hyoungseob Park",
"Daniel Wang",
"Fengyu Yang",
"Stefano Soatto",
"Dong Lao",
"Byung-Woo Hong",
"Alex Wong"
] | NeurIPS.cc/2024/Conference | 2410.02924 | [
"https://github.com/adonis-galaxy/rsa"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
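RSA's abstract is explicit that the model "outputs the parameters of a linear transformation which can be applied globally to a relative depth map". A minimal sketch of such a head; the MLP shape, the softplus parameterization, and the 512-d caption embedding are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class RSAHead(nn.Module):
    """Map a caption embedding to (scale, shift) and apply them
    globally to a relative depth map to obtain metric depth."""
    def __init__(self, text_dim: int = 512):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(text_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 2))

    def forward(self, text_emb: torch.Tensor, rel_depth: torch.Tensor):
        scale, shift = self.mlp(text_emb).unbind(dim=-1)
        scale = torch.nn.functional.softplus(scale)  # keep depth ordering
        return scale.view(-1, 1, 1) * rel_depth + shift.view(-1, 1, 1)

head = RSAHead()
metric = head(torch.randn(2, 512), torch.rand(2, 96, 128))  # toy shapes
print(metric.shape)  # torch.Size([2, 96, 128])
```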
null | https://openreview.net/forum?id=vDlj3veE9a | @inproceedings{
dexter2024the,
title={The Space Complexity of Approximating Logistic Loss},
author={Gregory Dexter and Petros Drineas and Rajiv Khanna},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vDlj3veE9a}
} | We provide space complexity lower bounds for data structures that approximate logistic loss up to $\epsilon$-relative error on a logistic regression problem with data $\mathbf{X} \in \mathbb{R}^{n \times d}$ and labels $\mathbf{y} \in \\{-1,1\\}^n$. The space complexity of existing coreset constructions depends on a natural complexity measure $\mu_\mathbf{y}(\mathbf{X})$. We give an $\tilde{\Omega}(\frac{d}{\epsilon^2})$ space complexity lower bound in the regime $\mu_\mathbf{y}(\mathbf{X}) = \mathcal{O}(1)$ that shows existing coresets are optimal in this regime up to lower order factors. We also prove a general $\tilde{\Omega}(d\cdot \mu_\mathbf{y}(\mathbf{X}))$ space lower bound when $\epsilon$ is constant, showing that the dependency on $\mu_\mathbf{y}(\mathbf{X})$ is not an artifact of mergeable coresets. Finally, we refute a prior conjecture that $\mu_\mathbf{y}(\mathbf{X})$ is hard to compute by providing an efficient linear programming formulation, and we empirically compare our algorithm to prior approximate methods. | The Space Complexity of Approximating Logistic Loss | [
"Gregory Dexter",
"Petros Drineas",
"Rajiv Khanna"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
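For the logistic-loss row above, the abstract does not restate the definition of $\mu_\mathbf{y}(\mathbf{X})$; the formulation below follows the coreset literature it builds on (Munteanu et al., 2018) and is offered as an assumption about the notation, not a quotation from the paper:

```latex
% With rows of X sign-scaled by their labels, Z = diag(y) X, the measure is
% the worst-case imbalance between positive and negative parts of Z beta:
\mu_{\mathbf{y}}(\mathbf{X})
  = \sup_{\boldsymbol{\beta} \neq \mathbf{0}}
    \frac{\lVert (\mathbf{Z}\boldsymbol{\beta})^{+} \rVert_{1}}
         {\lVert (\mathbf{Z}\boldsymbol{\beta})^{-} \rVert_{1}},
  \qquad \mathbf{Z} = \operatorname{diag}(\mathbf{y})\,\mathbf{X},
% where (v)^+ and (v)^- take entrywise positive and negative parts. A small
% mu means no direction separates the classes too cleanly -- the regime in
% which small coresets are known to exist.
```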
null | https://openreview.net/forum?id=vCOgjBIZuL | @inproceedings{
wu2024directd,
title={Direct3D: Scalable Image-to-3D Generation via 3D Latent Diffusion Transformer},
author={Shuang Wu and Youtian Lin and Yifei Zeng and Feihu Zhang and Jingxi Xu and Philip Torr and Xun Cao and Yao Yao},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vCOgjBIZuL}
} | Generating high-quality 3D assets from text and images has long been challenging, primarily due to the absence of scalable 3D representations capable of capturing intricate geometry distributions. In this work, we introduce Direct3D, a native 3D generative model scalable to in-the-wild input images, without requiring a multi-view diffusion model or SDS optimization. Our approach comprises two primary components: a Direct 3D Variational Auto-Encoder (D3D-VAE) and a Direct 3D Diffusion Transformer (D3D-DiT). D3D-VAE efficiently encodes high-resolution 3D shapes into a compact and continuous latent triplane space. Notably, our method directly supervises the decoded geometry using a semi-continuous surface sampling strategy, diverging from previous methods relying on rendered images as supervision signals. D3D-DiT models the distribution of encoded 3D latents and is specifically designed to fuse positional information from the three feature maps of the triplane latent, enabling a native 3D generative model scalable to large-scale 3D datasets. Additionally, we introduce an innovative image-to-3D generation pipeline incorporating semantic and pixel-level image conditions, allowing the model to produce 3D shapes consistent with the provided conditional image input. Extensive experiments demonstrate the superiority of our large-scale pre-trained Direct3D over previous image-to-3D approaches, achieving significantly better generation quality and generalization ability, thus establishing a new state-of-the-art for 3D content creation. Project page: https://www.neural4d.com/research/direct3d. | Direct3D: Scalable Image-to-3D Generation via 3D Latent Diffusion Transformer | [
"Shuang Wu",
"Youtian Lin",
"Yifei Zeng",
"Feihu Zhang",
"Jingxi Xu",
"Philip Torr",
"Xun Cao",
"Yao Yao"
] | NeurIPS.cc/2024/Conference | 2405.14832 | [
""
] | https://huggingface.co/papers/2405.14832 | 1 | 0 | 0 | 8 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=vCIc9BXzze | @inproceedings{
zhou2024unveiling,
title={Unveiling the Bias Impact on Symmetric Moral Consistency of Large Language Models},
author={Ziyi Zhou and Xinwei Guo and Jiashi Gao and Xiangyu Zhao and Shiyao Zhang and Xin Yao and Xuetao Wei},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vCIc9BXzze}
} | Large Language Models (LLMs) have demonstrated remarkable capabilities, surpassing human experts in many benchmark tests and playing a vital role across industry sectors. Despite their effectiveness, a notable drawback of LLMs is their inconsistent moral behavior, which raises ethical concerns. This work delves into symmetric moral consistency in large language models and demonstrates that modern LLMs lack sufficient consistency in moral scenarios. Our extensive investigation of twelve popular LLMs reveals that their assessed consistency scores are influenced by position bias and selection bias rather than their intrinsic abilities. We propose a new framework, tSMC, which gauges the effects of these biases and effectively mitigates their impact based on the Kullback–Leibler divergence to pinpoint LLMs' bias-mitigated Symmetric Moral Consistency. We find that the ability of LLMs to maintain consistency varies across different moral scenarios. Specifically, LLMs show more consistency in scenarios with clear moral answers than in those where no choice is morally perfect. The average consistency score of 12 LLMs ranges from $60.7\%$ in high-ambiguity moral scenarios to $84.8\%$ in low-ambiguity moral scenarios. | Unveiling the Bias Impact on Symmetric Moral Consistency of Large Language Models | [
"Ziyi Zhou",
"Xinwei Guo",
"Jiashi Gao",
"Xiangyu Zhao",
"Shiyao Zhang",
"Xin Yao",
"Xuetao Wei"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=vBxeeH1X4y | @inproceedings{
sun2024doob,
title={2D-{OOB}: Attributing Data Contribution Through Joint Valuation Framework},
author={Yifan Sun and Jingyan Shen and Yongchan Kwon},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vBxeeH1X4y}
} | Data valuation has emerged as a powerful framework for quantifying each datum's contribution to the training of a machine learning model. However, it is crucial to recognize that the quality of cells within a single data point can vary greatly in practice. For example, even in the case of an abnormal data point, not all cells are necessarily noisy. The single scalar score assigned by existing data valuation methods blurs the distinction between noisy and clean cells of a data point, making it challenging to interpret the data values. In this paper, we propose 2D-OOB, an out-of-bag estimation framework for jointly determining helpful (or detrimental) samples as well as the particular cells that drive them. Our comprehensive experiments demonstrate that 2D-OOB achieves state-of-the-art performance across multiple use cases while being exponentially faster. Specifically, 2D-OOB shows promising results in detecting and rectifying fine-grained outliers at the cell level, and localizing backdoor triggers in data poisoning attacks. | 2D-OOB: Attributing Data Contribution Through Joint Valuation Framework | [
"Yifan Sun",
"Jingyan Shen",
"Yongchan Kwon"
] | NeurIPS.cc/2024/Conference | 2408.03572 | [
"https://github.com/YifanSun99/2d-oob"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
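2D-OOB, per the abstract above, extends out-of-bag data valuation from a per-example score to a joint example-by-cell score. A hedged sketch of one way such a 2D aggregation can work over a bagging ensemble that subsamples features; the voting rule below is an assumption, not the paper's exact estimator:

```python
import numpy as np

def oob_cell_values(preds_ok, in_bag, feat_used):
    """preds_ok:  (T, n) bool, learner t classified example i correctly
    in_bag:    (T, n) bool, example i was in learner t's bootstrap bag
    feat_used: (T, d) bool, learner t used feature j
    Cell (i, j) averages correctness over learners that left i OUT of
    bag and used feature j, crediting cells that help generalization."""
    n, d = preds_ok.shape[1], feat_used.shape[1]
    vals = np.zeros((n, d))
    for j in range(d):
        mask = (~in_bag) & feat_used[:, j][:, None]      # (T, n) valid votes
        cnt = mask.sum(axis=0)
        vals[:, j] = np.where(cnt > 0,
                              (preds_ok & mask).sum(axis=0) / np.maximum(cnt, 1),
                              0.0)
    return vals

rng = np.random.default_rng(0)
vals = oob_cell_values(rng.random((50, 10)) > 0.3,
                       rng.random((50, 10)) > 0.4,
                       rng.random((50, 5)) > 0.5)
print(vals.shape)  # (10, 5): one score per (example, cell)
```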
null | https://openreview.net/forum?id=vBlzen37i0 | @inproceedings{
adcock2024optimal,
title={Optimal deep learning of holomorphic operators between Banach spaces},
author={Ben Adcock and Nick Dexter and Sebastian Moraga},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vBlzen37i0}
} | Operator learning problems arise in many key areas of scientific computing where Partial Differential Equations (PDEs) are used to model physical systems. In such scenarios, the operators map between Banach or Hilbert spaces. In this work, we tackle the problem of learning operators between Banach spaces, in contrast to the vast majority of past works considering only Hilbert spaces. We focus on learning holomorphic operators -- an important class of problems with many applications. We combine arbitrary approximate encoders and decoders with standard feedforward Deep Neural Network (DNN) architectures -- specifically, those with constant width exceeding the depth -- under standard $\ell^2$-loss minimization. We first identify a family of DNNs such that the resulting Deep Learning (DL) procedure achieves optimal generalization bounds for such operators. For standard fully-connected architectures, we then show that there are uncountably many minimizers of the training problem that yield equivalent optimal performance. The DNN architectures we consider are `problem agnostic', with width and depth only depending on the amount of training data $m$ and not on regularity assumptions of the target operator. Next, we show that DL is optimal for this problem: no recovery procedure can surpass these generalization bounds up to log terms. Finally, we present numerical results demonstrating the practical performance on challenging problems including the parametric diffusion, Navier-Stokes-Brinkman and Boussinesq PDEs. | Optimal deep learning of holomorphic operators between Banach spaces | [
"Ben Adcock",
"Nick Dexter",
"Sebastian Moraga"
] | NeurIPS.cc/2024/Conference | 2406.13928 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=vBah12uVbD | @inproceedings{
javanmardi2024conformalized,
title={Conformalized Credal Set Predictors},
author={Alireza Javanmardi and David Stutz and Eyke H{\"u}llermeier},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vBah12uVbD}
} | Credal sets are sets of probability distributions that are considered as candidates for an imprecisely known ground-truth distribution. In machine learning, they have recently attracted attention as an appealing formalism for uncertainty representation, in particular, due to their ability to represent both the aleatoric and epistemic uncertainty in a prediction. However, the design of methods for learning credal set predictors remains a challenging problem. In this paper, we make use of conformal prediction for this purpose. More specifically, we propose a method for predicting credal sets in the classification task, given training data labeled by probability distributions. Since our method inherits the coverage guarantees of conformal prediction, our conformal credal sets are guaranteed to be valid with high probability (without any assumptions on model or distribution). We demonstrate the applicability of our method on ambiguous classification tasks for uncertainty quantification. | Conformalized Credal Set Predictors | [
"Alireza Javanmardi",
"David Stutz",
"Eyke Hüllermeier"
] | NeurIPS.cc/2024/Conference | 2402.10723 | [
"https://github.com/alireza-javanmardi/conformal-credal-sets"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
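Since the credal-set paper above inherits standard conformal coverage guarantees, its calibration step can be pictured with split conformal prediction over distributions. A sketch assuming total-variation distance as the nonconformity score (the paper's actual score may differ):

```python
import numpy as np

def conformal_radius(pred_dists, label_dists, alpha=0.1):
    """Calibrate a radius q so that the ball {p : TV(p, prediction) <= q}
    around a test prediction covers the true label distribution with
    probability >= 1 - alpha (split conformal guarantee)."""
    scores = 0.5 * np.abs(pred_dists - label_dists).sum(axis=1)  # TV scores
    n = len(scores)
    level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, min(level, 1.0), method="higher")

rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(3), size=100)  # calibration-set predictions
Q = rng.dirichlet(np.ones(3), size=100)  # annotated label distributions
print(conformal_radius(P, Q))
```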
null | https://openreview.net/forum?id=vBKoEZ1PG3 | @inproceedings{
tang2024hawk,
title={{HAWK}: Learning to Understand Open-World Video Anomalies},
author={Jiaqi Tang and Hao LU and RUIZHENG WU and Xiaogang Xu and Ke Ma and Cheng Fang and Bin Guo and Jiangbo Lu and Qifeng Chen and Ying-Cong Chen},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vBKoEZ1PG3}
} | Video Anomaly Detection (VAD) systems can autonomously monitor and identify disturbances, reducing the need for manual labor and associated costs. However, current VAD systems are often limited by their superficial semantic understanding of scenes and minimal user interaction. Additionally, the prevalent data scarcity in existing datasets restricts their applicability in open-world scenarios.
In this paper, we introduce HAWK, a novel framework that leverages interactive large Visual Language Models (VLMs) to interpret video anomalies precisely. Recognizing the difference in motion information between abnormal and normal videos, HAWK explicitly integrates the motion modality to enhance anomaly identification. To reinforce motion attention, we construct an auxiliary consistency loss within the motion and video space, guiding the video branch to focus on the motion modality. Moreover, to improve the interpretation of motion-to-language, we establish a clear supervisory relationship between motion and its linguistic representation. Furthermore, we have annotated over 8,000 anomaly videos with language descriptions, enabling effective training across diverse open-world scenarios, and also created 8,000 question-answering pairs for users' open-world questions. The final results demonstrate that HAWK achieves SOTA performance, surpassing existing baselines in both video description generation and question-answering. Our codes/dataset/demo will be released at https://github.com/jqtangust/hawk. | HAWK: Learning to Understand Open-World Video Anomalies | [
"Jiaqi Tang",
"Hao LU",
"RUIZHENG WU",
"Xiaogang Xu",
"Ke Ma",
"Cheng Fang",
"Bin Guo",
"Jiangbo Lu",
"Qifeng Chen",
"Ying-Cong Chen"
] | NeurIPS.cc/2024/Conference | 2405.16886 | [
"https://github.com/jqtangust/hawk"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
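HAWK's auxiliary consistency loss "within the motion and video space" can be pictured as pulling the two branches' features together. A minimal sketch; the cosine form, pooling, and feature dimension are assumptions rather than HAWK's exact loss:

```python
import torch
import torch.nn.functional as F

def motion_consistency_loss(video_feat, motion_feat):
    """Encourage the video branch to agree with the motion branch,
    steering the video representation toward motion cues."""
    return 1.0 - F.cosine_similarity(video_feat, motion_feat, dim=-1).mean()

v = torch.randn(4, 256, requires_grad=True)  # toy pooled video features
m = torch.randn(4, 256)                      # toy pooled motion features
loss = motion_consistency_loss(v, m)
loss.backward()                              # gradients flow to the video branch
print(loss.item())
```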
null | https://openreview.net/forum?id=vBGMbFgvsX | @inproceedings{
lee2024going,
title={Going Beyond Heuristics by Imposing Policy Improvement as a Constraint},
author={Chi-Chang Lee and Zhang-Wei Hong and Pulkit Agrawal},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vBGMbFgvsX}
} | In many reinforcement learning (RL) applications, incorporating heuristic rewards alongside the task reward is crucial for achieving desirable performance. Heuristics encode prior human knowledge about how a task should be done, providing valuable hints for RL algorithms. However, such hints may not be optimal, limiting the performance of learned policies.
The currently established way of using heuristics is to modify the heuristic reward in a manner that ensures that the optimal policy learned with it remains the same as the optimal policy for the task reward (i.e., optimal policy invariance).
However, these methods often fail in practical scenarios with limited training data. We found that while optimal policy invariance ensures convergence to the best policy based on task rewards, it doesn't guarantee better performance than policies trained with biased heuristics under a finite data regime, which is the setting faced in practice. In this paper, we introduce a new principle tailored for finite data settings. Instead of enforcing optimal policy invariance, we train a policy that combines task and heuristic rewards and ensure that it outperforms the heuristic-trained policy. As such, we prevent policies from merely exploiting heuristic rewards without improving the task reward. Our experiments on robotic locomotion, helicopter control, and manipulation tasks demonstrate that our method consistently outperforms the heuristic policy, regardless of the heuristic rewards' quality.
Code is available at https://github.com/Improbable-AI/hepo. | Going Beyond Heuristics by Imposing Policy Improvement as a Constraint | [
"Chi-Chang Lee",
"Zhang-Wei Hong",
"Pulkit Agrawal"
] | NeurIPS.cc/2024/Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
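The principle in the abstract above, training on mixed rewards while requiring the policy to beat the heuristic-trained policy on task reward, is naturally written as a constrained objective relaxed with a Lagrange multiplier. A sketch under that reading; the return estimates and multiplier schedule are hypothetical, not HEPO's published update rules:

```python
import torch

def improvement_constrained_loss(j_mix, j_task, j_task_heuristic, lam):
    """Maximize return under the mixed (task + heuristic) reward subject
    to J_task(pi) >= J_task(pi_heuristic); the hinge penalizes only
    actual constraint violations."""
    violation = torch.relu(j_task_heuristic - j_task)
    return -(j_mix - lam * violation)  # negated: a loss to minimize

loss = improvement_constrained_loss(torch.tensor(2.0), torch.tensor(0.8),
                                    torch.tensor(1.0), lam=5.0)
print(loss)  # tensor(-1.): violated by 0.2, penalized by 5 * 0.2
```

In practice the multiplier would itself be adapted (e.g., by gradient ascent on the violation), as in standard constrained policy optimization.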
null | https://openreview.net/forum?id=vAOgaPvgYr | @inproceedings{
dugan2024occamllm,
title={Occam{LLM}: Fast and Exact Language Model Arithmetic in a Single Step},
author={Owen M Dugan and Donato M. Jim{\'e}nez Benet{\'o} and Charlotte Loh and Zhuo Chen and Rumen Dangovski and Marin Soljacic},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vAOgaPvgYr}
} | Despite significant advancements in text generation and reasoning, Large Language Models (LLMs) still face challenges in accurately performing complex arithmetic operations. Language model systems often enable LLMs to generate code for arithmetic operations to achieve accurate calculations. However, this approach compromises speed and security, and fine-tuning risks the language model losing prior capabilities. We propose a framework that enables exact arithmetic in *a single autoregressive step*, providing faster, more secure, and more interpretable LLM systems with arithmetic capabilities. We use the hidden states of an LLM to control a symbolic architecture that performs arithmetic. Our implementation using Llama 3 with OccamNet as a symbolic model (OccamLlama) achieves 100\% accuracy on single arithmetic operations ($+,-,\times,\div,\sin{},\cos{},\log{},\exp{},\sqrt{}$), outperforming GPT 4o with and without a code interpreter. Furthermore, OccamLlama outperforms GPT 4o with and without a code interpreter on average across a range of mathematical problem solving benchmarks, demonstrating that OccamLLMs can excel in arithmetic tasks, even surpassing much larger models. Code is available at https://github.com/druidowm/OccamLLM. | OccamLLM: Fast and Exact Language Model Arithmetic in a Single Step | [
"Owen M Dugan",
"Donato M. Jiménez Benetó",
"Charlotte Loh",
"Zhuo Chen",
"Rumen Dangovski",
"Marin Soljacic"
] | NeurIPS.cc/2024/Conference | 2406.06576 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
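OccamLLM's key move, per the abstract, is using the LLM's hidden states to control a symbolic system that computes exactly in one step. A toy sketch of that control pattern; the head shape, hard argmax selection, and four-op table are assumptions, not the OccamNet architecture:

```python
import torch
import torch.nn as nn

OPS = [lambda a, b: a + b, lambda a, b: a - b,
       lambda a, b: a * b, lambda a, b: a / b]

class ArithmeticHead(nn.Module):
    """A hidden state selects one symbolic operation, which is then
    executed exactly rather than generated token by token."""
    def __init__(self, hidden_dim: int = 4096):
        super().__init__()
        self.op_logits = nn.Linear(hidden_dim, len(OPS))

    def forward(self, h: torch.Tensor, a: float, b: float) -> float:
        op = int(self.op_logits(h).argmax())  # hard selection of one op
        return OPS[op](a, b)                  # exact symbolic execution

head = ArithmeticHead(hidden_dim=32)          # toy hidden size
print(head(torch.randn(32), 123.0, 456.0))
```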
null | https://openreview.net/forum?id=vA4s3kN4QE | @inproceedings{
guotao2024lgvq,
title={{LG}-{VQ}: Language-Guided Codebook Learning},
author={Liang Guotao and Baoquan Zhang and Yaowei Wang and Yunming Ye and Xutao Li and Wanghuaibin and Luo Chuyao and kolaye and luolinfeng},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=vA4s3kN4QE}
} | Vector quantization (VQ) is a key technique in high-resolution and high-fidelity image synthesis; it aims to learn a codebook that encodes an image as a sequence of discrete codes, from which an image is then generated in an auto-regressive manner.
Although existing methods have shown superior performance, most methods prefer to learn a single-modal codebook (\emph{e.g.}, image), resulting in suboptimal performance when the codebook is applied to multi-modal downstream tasks (\emph{e.g.}, text-to-image, image captioning) due to the existence of modal gaps.
In this paper, we propose a novel language-guided codebook learning framework, called LG-VQ, which aims to learn a codebook that can be aligned with the text to improve the performance of multi-modal downstream tasks. Specifically, we first introduce pre-trained text semantics as prior knowledge, then design two novel alignment modules (\emph{i.e.}, Semantic Alignment Module, and Relationship Alignment Module) to transfer such prior knowledge into codes for achieving codebook text alignment.
In particular, our LG-VQ method is model-agnostic, which can be easily integrated into existing VQ models. Experimental results show that our method achieves superior performance on reconstruction and various multi-modal downstream tasks. | LG-VQ: Language-Guided Codebook Learning | [
"Liang Guotao",
"Baoquan Zhang",
"Yaowei Wang",
"Yunming Ye",
"Xutao Li",
"Wanghuaibin",
"Luo Chuyao",
"kolaye",
"luolinfeng"
] | NeurIPS.cc/2024/Conference | 2405.14206 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
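LG-VQ's Semantic Alignment Module, as described above, transfers pre-trained text semantics into the codes. A minimal sketch of one way to express such an alignment term; the mean pooling, linear projection, and cosine loss are assumptions, not LG-VQ's exact modules:

```python
import torch
import torch.nn.functional as F

def semantic_alignment_loss(codes, text_emb, proj):
    """codes:    (B, N, D) quantized code embeddings for an image
    text_emb: (B, T) pre-trained caption embeddings (frozen)
    proj:     linear map from code space into the text space
    Pull pooled codes toward the caption so the codebook aligns with text."""
    pooled = proj(codes.mean(dim=1))                        # (B, T)
    return 1.0 - F.cosine_similarity(pooled, text_emb, dim=-1).mean()

proj = torch.nn.Linear(64, 512)                             # toy dimensions
loss = semantic_alignment_loss(torch.randn(2, 16, 64),
                               torch.randn(2, 512), proj)
print(loss.item())
```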