Dataset columns (name: type, observed range):
bibtex_url: null
proceedings: string, 42 characters
bibtext: string, 197 to 792 characters
abstract: string, 303 to 3.45k characters
title: string, 10 to 159 characters
authors: sequence, 1 to 28 items
id: string, 44 classes
type: string, 16 classes
arxiv_id: string, 0 to 10 characters
GitHub: sequence, 1 item
paper_page: string, 444 classes
n_linked_authors: int64, -1 to 9
upvotes: int64, -1 to 42
num_comments: int64, -1 to 13
n_authors: int64, -1 to 92
paper_page_exists_pre_conf: int64, 0 to 1
Models: sequence, 0 to 100 items
Datasets: sequence, 0 to 11 items
Spaces: sequence, 0 to 100 items
null
https://openreview.net/forum?id=sW8yGZ4uVJ
@inproceedings{ mei2023orderingbased, title={Ordering-based Conditions for Global Convergence of Policy Gradient Methods}, author={Jincheng Mei and Bo Dai and Alekh Agarwal and Mohammad Ghavamzadeh and Csaba Szepesvari and Dale Schuurmans}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=sW8yGZ4uVJ} }
We prove that, for finite-arm bandits with linear function approximation, the global convergence of policy gradient (PG) methods depends on inter-related properties between the policy update and the representation. First, we establish a few key observations that frame the study: \textbf{(i)} Global convergence can be achieved under linear function approximation without policy or reward realizability, both for the standard Softmax PG and natural policy gradient (NPG). \textbf{(ii)} Approximation error is not a key quantity for characterizing global convergence in either algorithm. \textbf{(iii)} The conditions on the representation that imply global convergence are different between these two algorithms. Overall, these observations call into question approximation error as an appropriate quantity for characterizing the global convergence of PG methods under linear function approximation. Second, motivated by these observations, we establish new general results: \textbf{(i)} NPG with linear function approximation achieves global convergence \emph{if and only if} the projection of the reward onto the representable space preserves the optimal action's rank, a quantity that is not strongly related to approximation error. \textbf{(ii)} The global convergence of Softmax PG occurs if the representation satisfies a non-domination condition and can preserve the ranking of rewards, which goes well beyond policy or reward realizability. We provide experimental results to support these theoretical findings.
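As a quick illustration of the NPG condition stated above, the following minimal sketch (toy data and illustrative function names, not the authors' code) projects a reward vector onto the span of the feature matrix and checks whether the optimal action keeps its rank under that projection.

```python
import numpy as np

def projected_reward(features, reward):
    """Orthogonal projection of the reward vector onto the representable space span(features)."""
    # features: (K, d) feature matrix of K arms; reward: (K,) true mean rewards.
    return features @ np.linalg.pinv(features) @ reward

def npg_order_condition(features, reward):
    """Check whether the projected reward still ranks the optimal action first."""
    return int(np.argmax(projected_reward(features, reward))) == int(np.argmax(reward))

# Toy example: 3 arms with 2-dimensional features; the reward is not realizable as features @ theta.
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
r = np.array([1.0, 0.2, 0.9])
print(npg_order_condition(X, r))
```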
Ordering-based Conditions for Global Convergence of Policy Gradient Methods
[ "Jincheng Mei", "Bo Dai", "Alekh Agarwal", "Mohammad Ghavamzadeh", "Csaba Szepesvari", "Dale Schuurmans" ]
Conference
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=sUqG96QqZM
@inproceedings{ mo2023weaklysupervised, title={Weakly-Supervised Audio-Visual Segmentation}, author={Shentong Mo and Bhiksha Raj}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=sUqG96QqZM} }
Audio-visual segmentation is a challenging task that aims to predict pixel-level masks for sound sources in a video. Previous work applied a comprehensive, manually designed architecture with countless pixel-wise accurate masks as supervision. However, these pixel-level masks are expensive and not available in all cases. In this work, we aim to simplify the supervision to instance-level annotation, $\textit{i.e.}$, weakly-supervised audio-visual segmentation. We present a novel Weakly-Supervised Audio-Visual Segmentation framework, namely WS-AVS, that can learn multi-scale audio-visual alignment with multi-scale multiple-instance contrastive learning for audio-visual segmentation. Extensive experiments on AVSBench demonstrate the effectiveness of our WS-AVS in the weakly-supervised audio-visual segmentation of single-source and multi-source scenarios.
Weakly-Supervised Audio-Visual Segmentation
[ "Shentong Mo", "Bhiksha Raj" ]
Conference
poster
2311.15080
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=sUFGPYS25Q
@inproceedings{ liu2023dseparation, title={D-Separation for Causal Self-Explanation}, author={Wei Liu and Jun Wang and Haozhao Wang and Ruixuan Li and Zhiying Deng and YuanKai Zhang and Yang Qiu}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=sUFGPYS25Q} }
Rationalization aims to strengthen the interpretability of NLP models by extracting a subset of human-intelligible pieces of their input texts. Conventional works generally employ the maximum mutual information (MMI) criterion to find the rationale that is most indicative of the target label. However, this criterion can be influenced by spurious features that correlate with the causal rationale or the target label. Instead of attempting to rectify the issues of the MMI criterion, we propose a novel criterion to uncover the causal rationale, termed the Minimum Conditional Dependence (MCD) criterion, which is grounded on our finding that the non-causal features and the target label are \emph{d-separated} by the causal rationale. By minimizing the dependence between the non-selected parts of the input and the target label conditioned on the selected rationale candidate, all the causes of the label are compelled to be selected. In this study, we employ a simple and practical measure for dependence, specifically the KL-divergence, to validate our proposed MCD criterion. Empirically, we demonstrate that MCD improves the F1 score by up to 13.7% compared to previous state-of-the-art MMI-based methods. Our code is in an anonymous repository: https://anonymous.4open.science/r/MCD-CE88.
D-Separation for Causal Self-Explanation
[ "Wei Liu", "Jun Wang", "Haozhao Wang", "Ruixuan Li", "Zhiying Deng", "YuanKai Zhang", "Yang Qiu" ]
Conference
poster
2309.13391
[ "https://github.com/jugechengzi/rationalization-mcd" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=sTjW3JHs2V
@inproceedings{ zhang2023let, title={Let the Flows Tell: Solving Graph Combinatorial Problems with {GF}lowNets}, author={Dinghuai Zhang and Hanjun Dai and Nikolay Malkin and Aaron Courville and Yoshua Bengio and Ling Pan}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=sTjW3JHs2V} }
Combinatorial optimization (CO) problems are often NP-hard and thus out of reach for exact algorithms, making them a tempting domain to apply machine learning methods. The highly structured constraints in these problems can hinder either optimization or sampling directly in the solution space. On the other hand, GFlowNets have recently emerged as powerful machinery to efficiently sample from composite unnormalized densities sequentially and have the potential to amortize such solution-searching processes in CO, as well as generate diverse solution candidates. In this paper, we design Markov decision processes (MDPs) for different combinatorial problems and propose to train conditional GFlowNets to sample from the solution space. Efficient training techniques are also developed to benefit long-range credit assignment. Through extensive experiments on a variety of different CO tasks with synthetic and realistic data, we demonstrate that GFlowNet policies can efficiently find high-quality solutions. Our implementation is open-sourced at https://github.com/zdhNarsil/GFlowNet-CombOpt.
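To make the MDP-design step concrete, here is a toy sequential environment for maximum independent set, where a state is a partial vertex selection and an action adds a non-adjacent vertex. This construction is only an illustrative assumption, not the paper's exact MDP, and a GFlowNet would sample terminal states in proportion to a reward-derived density rather than with the random policy used here.

```python
import random
import networkx as nx

class MISEnv:
    """Toy sequential MDP for maximum independent set: add one vertex per step."""
    def __init__(self, graph):
        self.graph = graph
        self.selected = set()

    def valid_actions(self):
        # Unselected vertices that are not adjacent to any selected vertex.
        blocked = set(self.selected)
        for v in self.selected:
            blocked.update(self.graph.neighbors(v))
        return [v for v in self.graph.nodes if v not in blocked]

    def step(self, v):
        assert v in self.valid_actions(), "invalid action"
        self.selected.add(v)
        done = len(self.valid_actions()) == 0                 # terminal: no vertex can be added
        reward = float(len(self.selected)) if done else 0.0   # terminal reward = set size
        return self.selected, reward, done

# Roll out a random policy on a small graph (a trained GFlowNet policy would replace this).
env = MISEnv(nx.cycle_graph(6))
done, reward = False, 0.0
while not done:
    _, reward, done = env.step(random.choice(env.valid_actions()))
print("independent set:", env.selected, "size:", reward)
```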
Let the Flows Tell: Solving Graph Combinatorial Problems with GFlowNets
[ "Dinghuai Zhang", "Hanjun Dai", "Nikolay Malkin", "Aaron Courville", "Yoshua Bengio", "Ling Pan" ]
Conference
spotlight
[ "https://github.com/zdhnarsil/gflownet-combopt" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=sQyRQjun46
@inproceedings{ zang2023understanding, title={Understanding and Addressing the Pitfalls of Bisimulation-based Representations in Offline Reinforcement Learning}, author={Hongyu Zang and Xin Li and Leiji Zhang and Yang Liu and Baigui Sun and Riashat Islam and Remi Tachet des Combes and Romain Laroche}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=sQyRQjun46} }
While bisimulation-based approaches hold promise for learning robust state representations for Reinforcement Learning (RL) tasks, their efficacy in offline RL tasks has not been up to par. In some instances, their performance has even significantly underperformed alternative methods. We aim to understand why bisimulation methods succeed in online settings, but falter in offline tasks. Our analysis reveals that missing transitions in the dataset are particularly harmful to the bisimulation principle, leading to ineffective estimation. We also shed light on the critical role of reward scaling in bounding the scale of bisimulation measurements and of the value error they induce. Based on these findings, we propose to apply the expectile operator for representation learning to our offline RL setting, which helps to prevent overfitting to incomplete data. Meanwhile, by introducing an appropriate reward scaling strategy, we avoid the risk of feature collapse in representation space. We implement these recommendations on two state-of-the-art bisimulation-based algorithms, MICo and SimSR, and demonstrate performance gains on two benchmark suites: D4RL and Visual D4RL. Codes are provided at \url{https://github.com/zanghyu/Offline_Bisimulation}.
Understanding and Addressing the Pitfalls of Bisimulation-based Representations in Offline Reinforcement Learning
[ "Hongyu Zang", "Xin Li", "Leiji Zhang", "Yang Liu", "Baigui Sun", "Riashat Islam", "Remi Tachet des Combes", "Romain Laroche" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=sQBGVw5qH9
@inproceedings{ hu2023cocktail, title={Cocktail: Mixing Multi-Modality Control for Text-Conditional Image Generation}, author={Minghui Hu and Jianbin Zheng and Daqing Liu and Chuanxia Zheng and Chaoyue Wang and Dacheng Tao and Tat-Jen Cham}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=sQBGVw5qH9} }
Text-conditional diffusion models are able to generate high-fidelity images with diverse contents. However, linguistic representations frequently exhibit ambiguous descriptions of the envisioned objective imagery, requiring the incorporation of additional control signals to bolster the efficacy of text-guided diffusion models. In this work, we propose Cocktail, a pipeline to mix various modalities into one embedding, amalgamated with a generalized ControlNet (gControlNet), a controllable normalisation (ControlNorm), and a spatial guidance sampling method, to actualize multi-modal and spatially-refined control for text-conditional diffusion models. Specifically, we introduce a hyper-network gControlNet, dedicated to the alignment and infusion of the control signals from disparate modalities into the pre-trained diffusion model. gControlNet is capable of accepting flexible modality signals, encompassing the simultaneous reception of any combination of modality signals, or the supplementary fusion of multiple modality signals. The control signals are then fused and injected into the backbone model according to our proposed ControlNorm. Furthermore, our advanced spatial guidance sampling methodology proficiently incorporates the control signal into the designated region, thereby circumventing the manifestation of undesired objects within the generated image. We demonstrate the results of our method in controlling various modalities, proving high-quality synthesis and fidelity to multiple external signals.
Cocktail: Mixing Multi-Modality Control for Text-Conditional Image Generation
[ "Minghui Hu", "Jianbin Zheng", "Daqing Liu", "Chuanxia Zheng", "Chaoyue Wang", "Dacheng Tao", "Tat-Jen Cham" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=sPLTQSf6GI
@inproceedings{ park2023a, title={A Measure-Theoretic Axiomatisation of Causality}, author={Junhyung Park and Simon Buchholz and Bernhard Sch{\"o}lkopf and Krikamol Muandet}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=sPLTQSf6GI} }
Causality is a central concept in a wide range of research areas, yet there is still no universally agreed axiomatisation of causality. We view causality both as an extension of probability theory and as a study of what happens when one intervenes on a system, and argue in favour of taking Kolmogorov's measure-theoretic axiomatisation of probability as the starting point towards an axiomatisation of causality. To that end, we propose the notion of a causal space, consisting of a probability space along with a collection of transition probability kernels, called causal kernels, that encode the causal information of the space. Our proposed framework is not only rigorously grounded in measure theory, but it also sheds light on long-standing limitations of existing frameworks including, for example, cycles, latent variables and stochastic processes.
A Measure-Theoretic Axiomatisation of Causality
[ "Junhyung Park", "Simon Buchholz", "Bernhard Schölkopf", "Krikamol Muandet" ]
Conference
oral
2305.17139
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=sOQBHlCmzp
@inproceedings{ wang2023contrast, title={Contrast Everything: A Hierarchical Contrastive Framework for Medical Time-Series}, author={Yihe Wang and Yu Han and Haishuai Wang and Xiang Zhang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=sOQBHlCmzp} }
Contrastive representation learning is crucial in medical time series analysis as it alleviates dependency on labor-intensive, domain-specific, and scarce expert annotations. However, existing contrastive learning methods primarily focus on a single data level, which fails to fully exploit the intricate nature of medical time series. To address this issue, we present COMET, an innovative hierarchical framework that leverages data consistencies at all inherent levels in medical time series. Our meticulously designed model systematically captures data consistency from four potential levels: observation, sample, trial, and patient levels. By developing contrastive loss at multiple levels, we can learn effective representations that preserve comprehensive data consistency, maximizing information utilization in a self-supervised manner. We conduct experiments in the challenging patient-independent setting. We compare COMET against six baselines using three diverse datasets, which include ECG signals for myocardial infarction and EEG signals for Alzheimer’s and Parkinson’s diseases. The results demonstrate that COMET consistently outperforms all baselines, particularly in setups with 10% and 1% labeled data fractions across all datasets. These results underscore the significant impact of our framework in advancing contrastive representation learning techniques for medical time series. The source code is available at https://github.com/DL4mHealth/COMET.
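A minimal sketch of the multi-level idea is to sum a contrastive loss over several data levels; the InfoNCE form and the equal level weights below are assumptions for illustration, not COMET's exact objective.

```python
import torch
import torch.nn.functional as F

def info_nce(anchors, positives, temperature=0.1):
    """Standard InfoNCE: each anchor's positive is the same-index row of `positives`."""
    anchors = F.normalize(anchors, dim=-1)
    positives = F.normalize(positives, dim=-1)
    logits = anchors @ positives.T / temperature   # (B, B) similarity matrix
    labels = torch.arange(anchors.size(0))         # diagonal entries are the positives
    return F.cross_entropy(logits, labels)

def hierarchical_loss(views, weights=(1.0, 1.0, 1.0, 1.0)):
    """Weighted sum of contrastive losses over observation/sample/trial/patient levels.

    `views` is a list of (anchor, positive) embedding pairs, one per level, e.g. built by
    pairing augmented observations, samples from the same trial, trials from the same patient.
    """
    return sum(w * info_nce(a, p) for w, (a, p) in zip(weights, views))

# Toy usage with random embeddings standing in for the four levels.
torch.manual_seed(0)
views = [(torch.randn(8, 32), torch.randn(8, 32)) for _ in range(4)]
print(hierarchical_loss(views).item())
```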
Contrast Everything: A Hierarchical Contrastive Framework for Medical Time-Series
[ "Yihe Wang", "Yu Han", "Haishuai Wang", "Xiang Zhang" ]
Conference
poster
2310.14017
[ "https://github.com/dl4mhealth/comet" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=sOOg1xJADA
@inproceedings{ gatmiry2023projectionfree, title={Projection-Free Online Convex Optimization via Efficient Newton Iterations}, author={Khashayar Gatmiry and Zakaria Mhammedi}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=sOOg1xJADA} }
This paper presents new projection-free algorithms for Online Convex Optimization (OCO) over a convex domain $\mathcal{K} \subset \mathbb{R}^d$. Classical OCO algorithms (such as Online Gradient Descent) typically need to perform Euclidean projections onto the convex set $\mathcal{K}$ to ensure feasibility of their iterates. Alternative algorithms, such as those based on the Frank-Wolfe method, swap potentially-expensive Euclidean projections onto $\mathcal{K}$ for linear optimization over $\mathcal{K}$. However, such algorithms have a sub-optimal regret in OCO compared to projection-based algorithms. In this paper, we look at a third type of algorithms that output approximate Newton iterates using a self-concordant barrier for the set of interest. The use of a self-concordant barrier automatically ensures feasibility without the need for projections. However, the computation of the Newton iterates requires a matrix inverse, which can still be expensive. As our main contribution, we show how the stability of the Newton iterates can be leveraged to only compute the inverse Hessian in a vanishing fraction of the rounds, leading to a new efficient projection-free OCO algorithm with a state-of-the-art regret bound.
Projection-Free Online Convex Optimization via Efficient Newton Iterations
[ "Khashayar Gatmiry", "Zakaria Mhammedi" ]
Conference
poster
2306.11121
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=sLr1sohnmo
@inproceedings{ lanthaler2023error, title={Error Bounds for Learning with Vector-Valued Random Features}, author={Samuel Lanthaler and Nicholas H. Nelsen}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=sLr1sohnmo} }
This paper provides a comprehensive error analysis of learning with vector-valued random features (RF). The theory is developed for RF ridge regression in a fully general infinite-dimensional input-output setting, but nonetheless applies to and improves existing finite-dimensional analyses. In contrast to comparable work in the literature, the approach proposed here relies on a direct analysis of the underlying risk functional and completely avoids the explicit RF ridge regression solution formula in terms of random matrices. This removes the need for concentration results in random matrix theory or their generalizations to random operators. The main results established in this paper include strong consistency of vector-valued RF estimators under model misspecification and minimax optimal convergence rates in the well-specified setting. The parameter complexity (number of random features) and sample complexity (number of labeled data) required to achieve such rates are comparable with Monte Carlo intuition and free from logarithmic factors.
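For intuition, the following is a standard finite-dimensional random-feature ridge regression with vector-valued outputs (random Fourier features for a Gaussian kernel). It is only a sketch of the kind of estimator analyzed, and it uses the explicit solution formula that the paper's analysis deliberately avoids.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff(x, W, b):
    """Random Fourier features approximating a Gaussian kernel."""
    return np.sqrt(2.0 / W.shape[0]) * np.cos(x @ W.T + b)

# Data: inputs in R^2, vector-valued outputs in R^3.
n, d, p, m = 200, 2, 3, 300        # samples, input dim, output dim, number of random features
X = rng.normal(size=(n, d))
Y = np.stack([np.sin(X[:, 0]), np.cos(X[:, 1]), X[:, 0] * X[:, 1]], axis=1)

W = rng.normal(size=(m, d))        # random feature frequencies
b = rng.uniform(0, 2 * np.pi, m)   # random feature phases
Phi = rff(X, W, b)                 # (n, m) feature matrix

lam = 1e-2                         # ridge penalty
# Closed-form RF ridge regression; one coefficient vector per output coordinate.
C = np.linalg.solve(Phi.T @ Phi + lam * n * np.eye(m), Phi.T @ Y)   # (m, p)

X_test = rng.normal(size=(50, d))
Y_hat = rff(X_test, W, b) @ C
print(Y_hat.shape)                 # (50, 3)
```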
Error Bounds for Learning with Vector-Valued Random Features
[ "Samuel Lanthaler", "Nicholas H. Nelsen" ]
Conference
spotlight
2305.17170
[ "https://github.com/nickhnelsen/error-bounds-for-vvRF" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=sLhXMkI0kx
@inproceedings{ macdonald2023on, title={On skip connections and normalisation layers in deep optimisation}, author={Lachlan Ewen MacDonald and Jack Valmadre and Hemanth Saratchandran and Simon Lucey}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=sLhXMkI0kx} }
We introduce a general theoretical framework, designed for the study of gradient optimisation of deep neural networks, that encompasses ubiquitous architecture choices including batch normalisation, weight normalisation and skip connections. Our framework determines the curvature and regularity properties of multilayer loss landscapes in terms of their constituent layers, thereby elucidating the roles played by normalisation layers and skip connections in globalising these properties. We then demonstrate the utility of this framework in two respects. First, we give the only proof of which we are aware that a class of deep neural networks can be trained using gradient descent to global optima even when such optima only exist at infinity, as is the case for the cross-entropy cost. Second, we identify a novel causal mechanism by which skip connections accelerate training, which we verify predictively with ResNets on MNIST, CIFAR10, CIFAR100 and ImageNet.
On skip connections and normalisation layers in deep optimisation
[ "Lachlan Ewen MacDonald", "Jack Valmadre", "Hemanth Saratchandran", "Simon Lucey" ]
Conference
poster
2210.05371
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=sL4pJBXkxu
@inproceedings{ wang2023elden, title={{ELDEN}: Exploration via Local Dependencies}, author={Zizhao Wang and Jiaheng Hu and Peter Stone and Roberto Mart{\'\i}n-Mart{\'\i}n}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=sL4pJBXkxu} }
Tasks with large state space and sparse rewards present a longstanding challenge to reinforcement learning. In these tasks, an agent needs to explore the state space efficiently until it finds a reward. To deal with this problem, the community has proposed to augment the reward function with intrinsic reward, a bonus signal that encourages the agent to visit interesting states. In this work, we propose a new way of defining interesting states for environments with factored state spaces and complex chained dependencies, where an agent's actions may change the value of one entity that, in turn, may affect the value of another entity. Our insight is that, in these environments, interesting states for exploration are states where the agent is uncertain whether (as opposed to how) entities such as the agent or objects have some influence on each other. We present ELDEN, Exploration via Local DepENdencies, a novel intrinsic reward that encourages the discovery of new interactions between entities. ELDEN utilizes a novel scheme --- the partial derivative of the learned dynamics --- to model the local dependencies between entities accurately and computationally efficiently. The uncertainty of the predicted dependencies is then used as an intrinsic reward to encourage exploration toward new interactions. We evaluate the performance of ELDEN on four different domains with complex dependencies, ranging from 2D grid worlds to 3D robotic tasks. In all domains, ELDEN correctly identifies local dependencies and learns successful policies, significantly outperforming previous state-of-the-art exploration methods.
ELDEN: Exploration via Local Dependencies
[ "Zizhao Wang", "Jiaheng Hu", "Peter Stone", "Roberto Martín-Martín" ]
Conference
poster
2310.08702
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=sJDkwMVqb9
@inproceedings{ luo2023crosslinks, title={Cross-links Matter for Link Prediction: Rethinking the Debiased {GNN} from a Data Perspective}, author={Zihan Luo and Hong Huang and Jianxun Lian and Xiran Song and Xing Xie and Hai Jin}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=sJDkwMVqb9} }
Recently, the bias-related issues in GNN-based link prediction have raised widespread concerns. In this paper, we emphasize the bias on links across different node clusters, which we call cross-links, after considering their significance in both easing information cocoons and preserving graph connectivity. Instead of following the objective-oriented mechanism in prior works with compromised utility, we empirically find that existing GNN models face severe data bias between internal-links (links within the same cluster) and cross-links, and this inspires us to rethink the bias issue on cross-links from a data perspective. Specifically, we design a simple yet effective twin-structure framework, which can be easily applied to most GNNs to mitigate the bias as well as boost their utility in an end-to-end manner. The basic idea is to generate debiased node embeddings as demonstrations, and fuse them into the embeddings of original GNNs. In particular, we learn debiased node embeddings with the help of augmented supervision signals, and a novel dynamic training strategy is designed to effectively fuse debiased node embeddings with the original node embeddings. Experiments on three datasets with six common GNNs show that our framework can not only alleviate the bias between internal-links and cross-links, but also boost the overall accuracy. Comparisons with other state-of-the-art methods also verify the superiority of our method.
Cross-links Matter for Link Prediction: Rethinking the Debiased GNN from a Data Perspective
[ "Zihan Luo", "Hong Huang", "Jianxun Lian", "Xiran Song", "Xing Xie", "Hai Jin" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=sIU3WujeSl
@inproceedings{ guan2023voce, title={{VOCE}: Variational Optimization with Conservative Estimation for Offline Safe Reinforcement Learning}, author={Jiayi Guan and Guang Chen and Jiaming Ji and Long Yang and Ao Zhou and Zhijun Li and changjun jiang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=sIU3WujeSl} }
Offline safe reinforcement learning (RL) algorithms promise to learn policies that satisfy safety constraints directly in offline datasets without interacting with the environment. This arrangement is particularly important in scenarios with high sampling costs and potential dangers, such as autonomous driving and robotics. However, the influence of safety constraints and out-of-distribution (OOD) actions has made it challenging for previous methods to achieve high reward returns while ensuring safety. In this work, we propose a Variational Optimization with Conservative Estimation algorithm (VOCE) to solve the problem of optimizing safety policies in the offline dataset. Concretely, we reframe the problem of offline safe RL using probabilistic inference, which introduces variational distributions to make the optimization of policies more flexible. Subsequently, we utilize pessimistic estimation methods to estimate the Q-value of cost and reward, which mitigates the extrapolation errors induced by OOD actions. Finally, extensive experiments demonstrate that the VOCE algorithm achieves competitive performance across multiple experimental tasks, particularly outperforming state-of-the-art algorithms in terms of safety.
VOCE: Variational Optimization with Conservative Estimation for Offline Safe Reinforcement Learning
[ "Jiayi Guan", "Guang Chen", "Jiaming Ji", "Long Yang", "Ao Zhou", "Zhijun Li", "changjun jiang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=sFGkL5BsPi
@inproceedings{ li2023qdm, title={Q-{DM}: An Efficient Low-bit Quantized Diffusion Model}, author={Yanjing Li and Sheng Xu and Xianbin Cao and Xiao Sun and Baochang Zhang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=sFGkL5BsPi} }
Denoising diffusion generative models are capable of generating high-quality data, but suffer from a computation-costly generation process due to iterative noise estimation using full-precision networks. As an intuitive solution, quantization can significantly reduce the computational and memory consumption through low-bit parameters and operations. However, low-bit noise estimation networks in diffusion models (DMs) remain unexplored and perform much worse than their full-precision counterparts, as observed in our experimental studies. In this paper, we first identify that the bottlenecks of low-bit quantized DMs come from a large distribution oscillation on activations and accumulated quantization error caused by the multi-step denoising process. To address these issues, we develop a Timestep-aware Quantization (TaQ) method and a Noise-estimating Mimicking (NeM) scheme for low-bit quantized DMs (Q-DM) to effectively eliminate such oscillation and accumulated error, respectively, leading to well-performing low-bit DMs. In this way, we propose an efficient Q-DM that computes low-bit DMs by considering both the training and inference processes in the same framework. We evaluate our methods on popular DDPM and DDIM models. Extensive experimental results show that our method achieves much better performance than the prior arts. For example, the 4-bit Q-DM theoretically accelerates the 1000-step DDPM by 7.8x and achieves an FID score of 5.17 on the unconditional CIFAR-10 dataset.
Q-DM: An Efficient Low-bit Quantized Diffusion Model
[ "Yanjing Li", "Sheng Xu", "Xianbin Cao", "Xiao Sun", "Baochang Zhang" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=sC4RbbVKbu
@inproceedings{ bergsma2023sutranets, title={SutraNets: Sub-series Autoregressive Networks for Long-Sequence, Probabilistic Forecasting}, author={Shane Bergsma and Tim Zeyl and Lei Guo}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=sC4RbbVKbu} }
We propose SutraNets, a novel method for neural probabilistic forecasting of long-sequence time series. SutraNets use an autoregressive generative model to factorize the likelihood of long sequences into products of conditional probabilities. When generating long sequences, most autoregressive approaches suffer from harmful error accumulation, as well as challenges in modeling long-distance dependencies. SutraNets treat long, univariate prediction as multivariate prediction over lower-frequency sub-series. Autoregression proceeds across time and across sub-series in order to ensure coherent multivariate (and, hence, high-frequency univariate) outputs. Since sub-series can be generated using fewer steps, SutraNets effectively reduce error accumulation and signal path distances. We find SutraNets to significantly improve forecasting accuracy over competitive alternatives on six real-world datasets, including when we vary the number of sub-series and scale up the depth and width of the underlying sequence models.
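The core decomposition, viewing a long univariate series as several interleaved lower-frequency sub-series, can be illustrated in a few lines. The sketch below shows only this reshaping step, not SutraNets' autoregressive model.

```python
import numpy as np

def to_sub_series(x, num_sub):
    """Split a univariate series into `num_sub` interleaved lower-frequency sub-series.

    x has length T (assumed divisible by num_sub); the result has shape
    (num_sub, T // num_sub), where row k holds x[k], x[k + num_sub], x[k + 2*num_sub], ...
    """
    return x.reshape(-1, num_sub).T

def from_sub_series(sub):
    """Invert the decomposition back to the original univariate series."""
    return sub.T.reshape(-1)

x = np.arange(12, dtype=float)        # toy series of length 12
sub = to_sub_series(x, num_sub=3)     # 3 sub-series of length 4
assert np.allclose(from_sub_series(sub), x)
print(sub)
```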
SutraNets: Sub-series Autoregressive Networks for Long-Sequence, Probabilistic Forecasting
[ "Shane Bergsma", "Tim Zeyl", "Lei Guo" ]
Conference
poster
2312.14880
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=sABYNWKcwK
@inproceedings{ jiang2023doubly, title={Doubly Robust Augmented Transfer for Meta-Reinforcement Learning}, author={Yuankun Jiang and Nuowen Kan and Chenglin Li and Wenrui Dai and Junni Zou and Hongkai Xiong}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=sABYNWKcwK} }
Meta-reinforcement learning (Meta-RL), though enabling a fast adaptation to learn new skills by exploiting the common structure shared among different tasks, suffers performance degradation in the sparse-reward setting. Current hindsight-based sample transfer approaches can alleviate this issue by transferring relabeled trajectories from other tasks to a new task so as to provide informative experience for the target reward function, but are unfortunately constrained with the unrealistic assumption that tasks differ only in reward functions. In this paper, we propose a doubly robust augmented transfer (DRaT) approach, aiming at addressing the more general sparse reward meta-RL scenario with both dynamics mismatches and varying reward functions across tasks. Specifically, we design a doubly robust augmented estimator for efficient value-function evaluation, which tackles dynamics mismatches with the optimal importance weight of transition distributions achieved by minimizing the theoretically derived upper bound of mean squared error (MSE) between the estimated values of transferred samples and their true values in the target task. Due to its intractability, we then propose an interval-based approximation to this optimal importance weight, which is guaranteed to cover the optimum with a constrained and sample-independent upper bound on the MSE approximation error. Based on our theoretical findings, we finally develop a DRaT algorithm for transferring informative samples across tasks during the training of meta-RL. We implement DRaT on an off-policy meta-RL baseline, and empirically show that it significantly outperforms other hindsight-based approaches on various sparse-reward MuJoCo locomotion tasks with varying dynamics and reward functions.
Doubly Robust Augmented Transfer for Meta-Reinforcement Learning
[ "Yuankun Jiang", "Nuowen Kan", "Chenglin Li", "Wenrui Dai", "Junni Zou", "Hongkai Xiong" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=s97ezbqoDZ
@inproceedings{ ye2023rhbrainfs, title={{RH}-Brain{FS}: Regional Heterogeneous Multimodal Brain Networks Fusion Strategy}, author={Hongting Ye and Yalu Zheng and Yueying Li and Ke Zhang and Youyong Kong and Yonggui Yuan}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=s97ezbqoDZ} }
Multimodal fusion has become an important research technique in neuroscience that completes downstream tasks by extracting complementary information from multiple modalities. Existing multimodal research on brain networks mainly focuses on two modalities, structural connectivity (SC) and functional connectivity (FC). Recently, extensive literature has shown that the relationship between SC and FC is complex and not a simple one-to-one mapping. The coupling of structure and function at the regional level is heterogeneous. However, all previous studies have neglected the modal regional heterogeneity between SC and FC and fused their representations via "simple patterns", which are inefficient ways of multimodal fusion and affect the overall performance of the model. In this paper, to alleviate the issue of regional heterogeneity of multimodal brain networks, we propose a novel Regional Heterogeneous multimodal Brain networks Fusion Strategy (RH-BrainFS). Briefly, we introduce a brain subgraph networks module to extract regional characteristics of brain networks, and further use a new transformer-based fusion bottleneck module to alleviate the issue of regional heterogeneity between SC and FC. To the best of our knowledge, this is the first paper to explicitly state the issue of structural-functional modal regional heterogeneity and to propose a solution. Extensive experiments demonstrate that the proposed method outperforms several state-of-the-art methods in a variety of neuroscience tasks.
RH-BrainFS: Regional Heterogeneous Multimodal Brain Networks Fusion Strategy
[ "Hongting Ye", "Yalu Zheng", "Yueying Li", "Ke Zhang", "Youyong Kong", "Yonggui Yuan" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=s8QsYV1VZ2
@inproceedings{ luo2023and, title={{AND}: Adversarial Neural Degradation for Learning Blind Image Super-Resolution}, author={Fangzhou Luo and Xiaolin Wu and Yanhui Guo}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=s8QsYV1VZ2} }
Learnt deep neural networks for image super-resolution fail easily if the assumed degradation model in training mismatches that of the real degradation source at the inference stage. Instead of attempting to exhaust all degradation variants in simulation, which is unwieldy and impractical, we propose a novel adversarial neural degradation (AND) model that can, when trained in conjunction with a deep restoration neural network under a minmax criterion, generate a wide range of highly nonlinear complex degradation effects without any explicit supervision. The AND model has a unique advantage over the current state of the art in that it can generalize much better to unseen degradation variants and hence deliver significantly improved restoration performance on real-world images.
AND: Adversarial Neural Degradation for Learning Blind Image Super-Resolution
[ "Fangzhou Luo", "Xiaolin Wu", "Yanhui Guo" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=s86M8naPSv
@inproceedings{ maene2023softunification, title={Soft-Unification in Deep Probabilistic Logic}, author={Jaron Maene and Luc De Raedt}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=s86M8naPSv} }
A fundamental challenge in neuro-symbolic AI is to devise primitives that fuse the logical and neural concepts. The Neural Theorem Prover has proposed the notion of soft-unification to turn the symbolic comparison between terms (i.e. unification) into a comparison in embedding space. It has been shown that soft-unification is a powerful mechanism that can be used to learn logic rules in an end-to-end differentiable manner. We study soft-unification from a conceptual point of view and outline several desirable properties of this operation. These include non-redundancy in the proof, well-defined proof scores, and non-sparse gradients. Unfortunately, these properties are not satisfied by previous systems such as the Neural Theorem Prover. Therefore, we introduce a more principled framework called DeepSoftLog based on probabilistic rather than fuzzy semantics. Our experiments demonstrate that DeepSoftLog can outperform the state-of-the-art on neuro-symbolic benchmarks, highlighting the benefits of these properties.
Soft-Unification in Deep Probabilistic Logic
[ "Jaron Maene", "Luc De Raedt" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=s7xWeJQACI
@inproceedings{ shi2023dont, title={Don{\textquoteright}t Stop Pretraining? Make Prompt-based Fine-tuning Powerful Learner}, author={Zhengxiang Shi and Aldo Lipani}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=s7xWeJQACI} }
Language models (LMs) trained on vast quantities of unlabelled data have greatly advanced the field of natural language processing (NLP). In this study, we re-visit the widely accepted notion in NLP that continued pre-training LMs on task-related texts improves the performance of fine-tuning (FT) in downstream tasks. Through experiments on eight single-sentence tasks and eight sentence-pair tasks in both semi-supervised and fully-supervised settings, we find that conventional continued pre-training does not consistently provide benefits and can even be detrimental for sentence-pair tasks or when prompt-based FT is used. To tackle these issues, we propose Prompt-based Continued Pre-training (PCP), which combines the idea of instruction tuning with conventional continued pre-training. Our approach aims to improve the performance of prompt-based FT by presenting both task-related texts and prompt templates to LMs through unsupervised pre-training objectives before fine-tuning for the target task. Our empirical evaluations on 21 benchmarks demonstrate that the PCP consistently improves the performance of state-of-the-art prompt-based FT approaches (up to 20.1% absolute) in both semi-supervised and fully-supervised settings, even with only hundreds of unlabelled examples. Additionally, prompt-based FT with PCP outperforms state-of-the-art semi-supervised approaches with greater simplicity, eliminating the need for an iterative process and extra data augmentation. Our further analysis explores the performance lower bound of the PCP and reveals that the advantages of PCP persist across different sizes of models and datasets.
Don’t Stop Pretraining? Make Prompt-based Fine-tuning Powerful Learner
[ "Zhengxiang Shi", "Aldo Lipani" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=s1jQ91yFAb
@inproceedings{ boll2023on, title={On Certified Generalization in Structured Prediction}, author={Bastian Boll and Christoph Schnoerr}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=s1jQ91yFAb} }
In structured prediction, target objects have rich internal structure which does not factorize into independent components and violates common i.i.d. assumptions. This challenge becomes apparent through the exponentially large output space in applications such as image segmentation or scene graph generation. We present a novel PAC-Bayesian risk bound for structured prediction wherein the rate of generalization scales not only with the number of structured examples but also with their size. The underlying assumption, conforming to ongoing research on generative models, is that data are generated by the Knothe-Rosenblatt rearrangement of a factorizing reference measure. This allows us to explicitly distill the structure between random output variables into a Wasserstein dependency matrix. Our work takes a preliminary step towards leveraging powerful generative models to establish generalization bounds for discriminative downstream tasks in the challenging setting of structured prediction.
On Certified Generalization in Structured Prediction
[ "Bastian Boll", "Christoph Schnoerr" ]
Conference
poster
2306.09112
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=s1FjXzJ0jy
@inproceedings{ tworkowski2023focused, title={Focused Transformer: Contrastive Training for Context Scaling}, author={Szymon Tworkowski and Konrad Staniszewski and Miko{\l}aj Pacek and Yuhuai Wu and Henryk Michalewski and Piotr Mi{\l}o{\'s}}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=s1FjXzJ0jy} }
Large language models have an exceptional capability to incorporate new information in a contextual manner. However, the full potential of such an approach is often restrained due to a limitation in the effective context length. One solution to this issue is to endow an attention layer with access to an additional context, which comprises (key, value) pairs. Yet, as the number of documents increases, the proportion of relevant keys to irrelevant ones decreases, leading the model to focus more on the irrelevant keys. We identify a significant challenge, dubbed the distraction issue, where keys linked to different semantic values might overlap, making them hard to distinguish. To tackle this problem, we introduce the Focused Transformer (FoT), a technique that employs a training process inspired by contrastive learning. This novel approach enhances the structure of the (key, value) space, enabling an extension of the context length. Our method allows for fine-tuning pre-existing, large-scale models to lengthen their effective context. This is demonstrated by our fine-tuning of $3 B$ and $7 B$ OpenLLaMA checkpoints. The resulting models, which we name LongLLaMA, exhibit advancements in tasks requiring a long context. We further illustrate that our LongLLaMA models adeptly manage a $256 k$ context length for passkey retrieval.
Focused Transformer: Contrastive Training for Context Scaling
[ "Szymon Tworkowski", "Konrad Staniszewski", "Mikołaj Pacek", "Yuhuai Wu", "Henryk Michalewski", "Piotr Miłoś" ]
Conference
poster
2307.03170
[ "https://github.com/cstankonrad/long_llama" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=rzlqOVExUA
@inproceedings{ wang2023galopa, title={{GALOPA}: Graph Transport Learning with Optimal Plan Alignment}, author={Yejiang Wang and Yuhai Zhao and Daniel Zhengkui Wang and Ling Li}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=rzlqOVExUA} }
Self-supervised learning on graphs aims to learn graph representations in an unsupervised manner. While graph contrastive learning (GCL - relying on graph augmentation for creating perturbation views of anchor graphs and maximizing/minimizing similarity for positive/negative pairs) is a popular self-supervised method, it faces challenges in finding label-invariant augmented graphs and determining the exact extent of similarity between sample pairs to be achieved. In this work, we propose an alternative self-supervised solution that (i) goes beyond the label invariance assumption without distinguishing between positive/negative samples, (ii) can calibrate the encoder for preserving not only the structural information inside the graph, but the matching information between different graphs, (iii) learns isometric embeddings that preserve the distance between graphs, a by-product of our objective. Motivated by optimal transport theory, this scheme relies on the observation that the optimal transport plans between node representations at the output space, which measure the matching probability between two distributions, should be consistent with the plans between the corresponding graphs at the input space. The experimental findings include: (i) The plan alignment strategy significantly outperforms the counterpart using the transport distance; (ii) The proposed model shows superior performance using only node attributes as calibration signals, without relying on edge information; (iii) Our model maintains robust results even under high perturbation rates; (iv) Extensive experiments on various benchmarks validate the effectiveness of the proposed method.
GALOPA: Graph Transport Learning with Optimal Plan Alignment
[ "Yejiang Wang", "Yuhai Zhao", "Daniel Zhengkui Wang", "Ling Li" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=rzDBoh1tBh
@inproceedings{ wu2023private, title={Private Federated Frequency Estimation: Adapting to the Hardness of the Instance}, author={Jingfeng Wu and Wennan Zhu and Peter Kairouz and Vladimir Braverman}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=rzDBoh1tBh} }
In federated frequency estimation (FFE), multiple clients work together to estimate the frequency of their local data by communicating with a server, while maintaining the security constraint of $\mathtt{secsum}$ where the server can only access the sum of client-held vectors. For FFE with a single communication round, it is known that count sketch is nearly information-theoretically optimal [Chen et al., 2022]. However, when multiple communication rounds are allowed, we propose a new sketch algorithm that is provably more accurate than a naive adaptation of count sketch. Furthermore, we show that both our sketch algorithm and count sketch can achieve better accuracy when the problem instance is simpler. Therefore, we propose a two-phase approach to enable the use of a smaller sketch size for simpler problems. Finally, we provide mechanisms to make our proposed algorithm differentially private. We verify the performance of our methods through experiments conducted on real datasets.
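For reference, here is a minimal single-machine count sketch, the baseline mentioned above; the salted-hash construction is an illustrative choice, and the federated, secure-aggregation aspects of the paper are not shown.

```python
import numpy as np

class CountSketch:
    """Minimal count sketch: `depth` rows of `width` counters, with sign and bucket hashes per row."""
    def __init__(self, depth=5, width=256, seed=0):
        self.depth, self.width = depth, width
        self.table = np.zeros((depth, width))
        self.salts = np.random.default_rng(seed).integers(0, 2**31, size=depth)

    def _bucket(self, item, row):
        return hash((item, int(self.salts[row]))) % self.width

    def _sign(self, item, row):
        return 1 if hash((int(self.salts[row]), item)) % 2 == 0 else -1

    def update(self, item, count=1):
        for r in range(self.depth):
            self.table[r, self._bucket(item, r)] += self._sign(item, r) * count

    def query(self, item):
        # Median of the per-row unbiased estimates.
        return np.median([self._sign(item, r) * self.table[r, self._bucket(item, r)]
                          for r in range(self.depth)])

sk = CountSketch()
for word in ["a"] * 100 + ["b"] * 10 + ["c"]:
    sk.update(word)
print(sk.query("a"), sk.query("b"), sk.query("c"))
```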
Private Federated Frequency Estimation: Adapting to the Hardness of the Instance
[ "Jingfeng Wu", "Wennan Zhu", "Peter Kairouz", "Vladimir Braverman" ]
Conference
poster
2306.09396
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=rybsHQ4DXy
@inproceedings{ nagarajan2023egoenv, title={EgoEnv: Human-centric environment representations from egocentric video}, author={Tushar Nagarajan and Santhosh Kumar Ramakrishnan and Ruta Desai and James Hillis and Kristen Grauman}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=rybsHQ4DXy} }
First-person video highlights a camera-wearer's activities in the context of their persistent environment. However, current video understanding approaches reason over visual features from short video clips that are detached from the underlying physical space and capture only what is immediately visible. To facilitate human-centric environment understanding, we present an approach that links egocentric video and the environment by learning representations that are predictive of the camera-wearer's (potentially unseen) local surroundings. We train such models using videos from agents in simulated 3D environments where the environment is fully observable, and test them on human-captured real-world videos from unseen environments. On two human-centric video tasks, we show that models equipped with our environment-aware features consistently outperform their counterparts with traditional clip features. Moreover, despite being trained exclusively on simulated videos, our approach successfully handles real-world videos from HouseTours and Ego4D, and achieves state-of-the-art results on the Ego4D NLQ challenge.
EgoEnv: Human-centric environment representations from egocentric video
[ "Tushar Nagarajan", "Santhosh Kumar Ramakrishnan", "Ruta Desai", "James Hillis", "Kristen Grauman" ]
Conference
oral
2207.11365
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=rxsCTtkqA9
@inproceedings{ saha2023matrix, title={Matrix Compression via Randomized Low Rank and Low Precision Factorization}, author={Rajarshi Saha and Varun Srivastava and Mert Pilanci}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=rxsCTtkqA9} }
Matrices are exceptionally useful in various fields of study as they provide a convenient framework to organize and manipulate data in a structured manner. However, modern matrices can involve billions of elements, making their storage and processing quite demanding in terms of computational resources and memory usage. Although prohibitively large, such matrices are often approximately low rank. We propose an algorithm that exploits this structure to obtain a low rank decomposition of any matrix $\mathbf{A}$ as $\mathbf{A} \approx \mathbf{L}\mathbf{R}$, where $\mathbf{L}$ and $\mathbf{R}$ are the low rank factors. The total number of elements in $\mathbf{L}$ and $\mathbf{R}$ can be significantly less than that in $\mathbf{A}$. Furthermore, the entries of $\mathbf{L}$ and $\mathbf{R}$ are quantized to low precision formats -- compressing $\mathbf{A}$ by giving us a low rank and low precision factorization. Our algorithm first computes an approximate basis of the range space of $\mathbf{A}$ by randomly sketching its columns, followed by a quantization of the vectors constituting this basis. It then computes approximate projections of the columns of $\mathbf{A}$ onto this quantized basis. We derive upper bounds on the approximation error of our algorithm, and analyze the impact of target rank and quantization bit-budget. The tradeoff between compression ratio and approximation accuracy allows for flexibility in choosing these parameters based on specific application requirements. We empirically demonstrate the efficacy of our algorithm in image compression, nearest neighbor classification of image and text embeddings, and compressing the layers of LlaMa-$7$b. Our results illustrate that we can achieve compression ratios as aggressive as one bit per matrix coordinate, all while surpassing or maintaining the performance of traditional compression techniques.
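The algorithm outlined above can be sketched directly in NumPy; uniform scalar quantization stands in for the low-precision formats, and the bit-widths and quantizer below are assumptions for illustration rather than the paper's exact choices.

```python
import numpy as np

def quantize(x, bits):
    """Uniform symmetric scalar quantization of an array to `bits` bits."""
    scale = np.max(np.abs(x)) / (2 ** (bits - 1) - 1)
    return np.round(x / scale) * scale

def lplr(A, rank, bits_L=4, bits_R=4, seed=0):
    """Low-precision low-rank factorization A ~= L @ R via randomized column sketching."""
    rng = np.random.default_rng(seed)
    S = rng.normal(size=(A.shape[1], rank))       # random sketch of the columns of A
    Q, _ = np.linalg.qr(A @ S)                    # approximate orthonormal basis of range(A)
    L = quantize(Q, bits_L)                       # quantize the basis vectors
    R = quantize(np.linalg.pinv(L) @ A, bits_R)   # quantized projections onto that basis
    return L, R

rng = np.random.default_rng(1)
A = rng.normal(size=(200, 15)) @ rng.normal(size=(15, 100))   # a rank-15 test matrix
L, R = lplr(A, rank=20)
print("relative error:", np.linalg.norm(A - L @ R) / np.linalg.norm(A))
```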
Matrix Compression via Randomized Low Rank and Low Precision Factorization
[ "Rajarshi Saha", "Varun Srivastava", "Mert Pilanci" ]
Conference
poster
2310.11028
[ "https://github.com/pilancilab/matrix-compressor" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=rwrblCYb2A
@inproceedings{ scotti2023reconstructing, title={Reconstructing the Mind's Eye: f{MRI}-to-Image with Contrastive Learning and Diffusion Priors}, author={Paul Steven Scotti and Atmadeep Banerjee and Jimmie Goode and Stepan Shabalin and Alex Nguyen and Cohen Ethan and Aidan James Dempster and Nathalie Verlinde and Elad Yundler and David Weisberg and Kenneth Norman and Tanishq Mathew Abraham}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=rwrblCYb2A} }
We present MindEye, a novel fMRI-to-image approach to retrieve and reconstruct viewed images from brain activity. Our model comprises two parallel submodules that are specialized for retrieval (using contrastive learning) and reconstruction (using a diffusion prior). MindEye can map fMRI brain activity to any high dimensional multimodal latent space, like CLIP image space, enabling image reconstruction using generative models that accept embeddings from this latent space. We comprehensively compare our approach with other existing methods, using both qualitative side-by-side comparisons and quantitative evaluations, and show that MindEye achieves state-of-the-art performance in both reconstruction and retrieval tasks. In particular, MindEye can retrieve the exact original image even among highly similar candidates indicating that its brain embeddings retain fine-grained image-specific information. This allows us to accurately retrieve images even from large-scale databases like LAION-5B. We demonstrate through ablations that MindEye's performance improvements over previous methods result from specialized submodules for retrieval and reconstruction, improved training techniques, and training models with orders of magnitude more parameters. Furthermore, we show that MindEye can better preserve low-level image features in the reconstructions by using img2img, with outputs from a separate autoencoder. All code is available on GitHub.
Reconstructing the Mind's Eye: fMRI-to-Image with Contrastive Learning and Diffusion Priors
[ "Paul Steven Scotti", "Atmadeep Banerjee", "Jimmie Goode", "Stepan Shabalin", "Alex Nguyen", "Cohen Ethan", "Aidan James Dempster", "Nathalie Verlinde", "Elad Yundler", "David Weisberg", "Kenneth Norman", "Tanishq Mathew Abraham" ]
Conference
spotlight
2305.18274
[ "https://github.com/medarc-ai/fmri-reconstruction-nsd" ]
https://huggingface.co/papers/2305.18274
6
4
1
12
1
[]
[]
[]
null
https://openreview.net/forum?id=rwbzMiuFQl
@inproceedings{ lepori2023break, title={Break It Down: Evidence for Structural Compositionality in Neural Networks}, author={Michael A. Lepori and Thomas Serre and Ellie Pavlick}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=rwbzMiuFQl} }
Though modern neural networks have achieved impressive performance in both vision and language tasks, we know little about the functions that they implement. One possibility is that neural networks implicitly break down complex tasks into subroutines, implement modular solutions to these subroutines, and compose them into an overall solution to a task --- a property we term structural compositionality. Another possibility is that they may simply learn to match new inputs to learned templates, eliding task decomposition entirely. Here, we leverage model pruning techniques to investigate this question in both vision and language across a variety of architectures, tasks, and pretraining regimens. Our results demonstrate that models oftentimes implement solutions to subroutines via modular subnetworks, which can be ablated while maintaining the functionality of other subnetworks. This suggests that neural networks may be able to learn compositionality, obviating the need for specialized symbolic mechanisms.
Break It Down: Evidence for Structural Compositionality in Neural Networks
[ "Michael A. Lepori", "Thomas Serre", "Ellie Pavlick" ]
Conference
spotlight
[ "https://github.com/mlepori1/compositional_subnetworks" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=rsrfEIdawr
@inproceedings{ song2023drf, title={D\"a{RF}: Boosting Radiance Fields from Sparse Input Views with Monocular Depth Adaptation}, author={Jiuhn Song and Seonghoon Park and Honggyu An and Seokju Cho and Min-Seop Kwak and Sungjin Cho and Seungryong Kim}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=rsrfEIdawr} }
Neural radiance field (NeRF) shows powerful performance in novel view synthesis and 3D geometry reconstruction, but it suffers from critical performance degradation when the number of known viewpoints is drastically reduced. Existing works attempt to overcome this problem by employing external priors, but their success is limited to certain types of scenes or datasets. Employing monocular depth estimation (MDE) networks, pretrained on large-scale RGB-D datasets, with powerful generalization capability may be a key to solving this problem: however, using MDE in conjunction with NeRF comes with a new set of challenges due to various ambiguity problems exhibited by monocular depths. In this light, we propose a novel framework, dubbed DäRF, that achieves robust NeRF reconstruction with a handful of real-world images by combining the strengths of NeRF and monocular depth estimation through online complementary training. Our framework imposes the MDE network's powerful geometry prior to NeRF representation at both seen and unseen viewpoints to enhance its robustness and coherence. In addition, we overcome the ambiguity problems of monocular depths through patch-wise scale-shift fitting and geometry distillation, which adapts the MDE network to produce depths aligned accurately with NeRF geometry. Experiments show our framework achieves state-of-the-art results both quantitatively and qualitatively, demonstrating consistent and reliable performance in both indoor and outdoor real-world datasets.
DäRF: Boosting Radiance Fields from Sparse Input Views with Monocular Depth Adaptation
[ "Jiuhn Song", "Seonghoon Park", "Honggyu An", "Seokju Cho", "Min-Seop Kwak", "Sungjin Cho", "Seungryong Kim" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=rqE0fEQDqs
@inproceedings{ chen2023pointgpt, title={Point{GPT}: Auto-regressively Generative Pre-training from Point Clouds}, author={Guangyan Chen and Meiling Wang and Yi Yang and Kai Yu and Li Yuan and Yufeng Yue}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=rqE0fEQDqs} }
Large language models (LLMs) based on the generative pre-training transformer (GPT) have demonstrated remarkable effectiveness across a diverse range of downstream tasks. Inspired by the advancements of the GPT, we present PointGPT, a novel approach that extends the concept of GPT to point clouds, addressing the challenges associated with disorder properties, low information density, and task gaps. Specifically, a point cloud auto-regressive generation task is proposed to pre-train transformer models. Our method partitions the input point cloud into multiple point patches and arranges them in an ordered sequence based on their spatial proximity. Then, an extractor-generator based transformer decoder, with a dual masking strategy, learns latent representations conditioned on the preceding point patches, aiming to predict the next one in an auto-regressive manner. To explore scalability and enhance performance, a larger pre-training dataset is collected. Additionally, a subsequent post-pre-training stage is introduced, incorporating a labeled hybrid dataset. Our scalable approach allows for learning high-capacity models that generalize well, achieving state-of-the-art performance on various downstream tasks. In particular, our approach achieves classification accuracies of 94.9% on the ModelNet40 dataset and 93.4% on the ScanObjectNN dataset, outperforming all other transformer models. Furthermore, our method also attains new state-of-the-art accuracies on all four few-shot learning benchmarks. Codes are available at https://github.com/CGuangyan-BIT/PointGPT.
PointGPT: Auto-regressively Generative Pre-training from Point Clouds
[ "Guangyan Chen", "Meiling Wang", "Yi Yang", "Kai Yu", "Li Yuan", "Yufeng Yue" ]
Conference
poster
2305.11487
[ "https://github.com/CGuangyan-BIT/PointGPT" ]
-1
-1
-1
-1
0
[]
[]
[]
null
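The auto-regressive pre-training described in the PointGPT record above hinges on arranging point patches in an ordered sequence based on spatial proximity. The sketch below shows one simple greedy nearest-neighbour serialization of patch centers; it is only an illustration of the ordering idea, and the paper's actual ordering scheme and patch extraction (see the linked repository) may differ.

```python
import numpy as np

def order_by_proximity(centers):
    # Greedy nearest-neighbour ordering of patch centers, starting from the
    # point closest to the centroid: one simple way to serialize a point cloud.
    remaining = list(range(len(centers)))
    start = int(np.argmin(np.linalg.norm(centers - centers.mean(0), axis=1)))
    order = [start]
    remaining.remove(start)
    while remaining:
        last = centers[order[-1]]
        nxt = min(remaining, key=lambda i: np.linalg.norm(centers[i] - last))
        order.append(nxt)
        remaining.remove(nxt)
    return order

centers = np.random.default_rng(0).random((16, 3))   # toy patch centers
print(order_by_proximity(centers))
```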
https://openreview.net/forum?id=rpuEARqB54
@inproceedings{ pour2023on, title={On the Role of Noise in the Sample Complexity of Learning Recurrent Neural Networks: Exponential Gaps for Long Sequences}, author={Alireza Fathollah Pour and Hassan Ashtiani}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=rpuEARqB54} }
We consider the class of noisy multi-layered sigmoid recurrent neural networks with $w$ (unbounded) weights for classification of sequences of length $T$, where independent noise distributed according to $\mathcal{N}(0,\sigma^2)$ is added to the output of each neuron in the network. Our main result shows that the sample complexity of PAC learning this class can be bounded by $O (w\log(T/\sigma))$. For the non-noisy version of the same class (i.e., $\sigma=0$), we prove a lower bound of $\Omega (wT)$ for the sample complexity. Our results indicate an exponential gap in the dependence of sample complexity on $T$ for noisy versus non-noisy networks. Moreover, given the mild logarithmic dependence of the upper bound on $1/\sigma$, this gap still holds even for numerically negligible values of $\sigma$.
On the Role of Noise in the Sample Complexity of Learning Recurrent Neural Networks: Exponential Gaps for Long Sequences
[ "Alireza Fathollah Pour", "Hassan Ashtiani" ]
Conference
poster
2305.18423
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=roGYQvarnC
@inproceedings{ purushwalkam2023conrad, title={ConRad: Image Constrained Radiance Fields for 3D Generation from a Single Image}, author={Senthil Purushwalkam and Nikhil Naik}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=roGYQvarnC} }
We present a novel method for reconstructing 3D objects from a single RGB image. Our method leverages the latest image generation models to infer the hidden 3D structure while remaining faithful to the input image. While existing methods obtain impressive results in generating 3D models from text prompts, they do not provide an easy approach for conditioning on input RGB data. Naive extensions of these methods often lead to improper alignment in appearance between the input image and the 3D reconstructions. We address these challenges by introducing Image Constrained Radiance Fields (ConRad), a novel variant of neural radiance fields. ConRad is an efficient 3D representation that explicitly captures the appearance of an input image in one viewpoint. We propose a training algorithm that leverages the single RGB image in conjunction with pretrained Diffusion Models to optimize the parameters of a ConRad representation. Extensive experiments show that ConRad representations can simplify preservation of image details while producing a realistic 3D reconstruction. Compared to existing state-of-the-art baselines, we show that our 3D reconstructions remain more faithful to the input and produce more consistent 3D models while demonstrating significantly improved quantitative performance on a ShapeNet object benchmark.
ConRad: Image Constrained Radiance Fields for 3D Generation from a Single Image
[ "Senthil Purushwalkam", "Nikhil Naik" ]
Conference
poster
2311.05230
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=rnKgbKmelt
@inproceedings{ sun2023adaplanner, title={AdaPlanner: Adaptive Planning from Feedback with Language Models}, author={Haotian Sun and Yuchen Zhuang and Lingkai Kong and Bo Dai and Chao Zhang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=rnKgbKmelt} }
Large language models (LLMs) have recently demonstrated the potential to act as autonomous agents for sequential decision-making tasks. However, most existing methods either take actions greedily without planning or rely on static plans that are not adaptable to environmental feedback. Consequently, the sequential decision-making performance of LLM agents degenerates as problem complexity and plan horizons increase. We propose a closed-loop approach, AdaPlanner, which allows the LLM agent to refine its self-generated plan adaptively in response to environmental feedback. In AdaPlanner, the LLM agent adaptively refines its plan from feedback with both in-plan and out-of-plan refinement strategies. To mitigate hallucination, we develop a code-style LLM prompt structure that facilitates plan generation across a variety of tasks, environments, and agent capabilities. Furthermore, we propose a skill discovery mechanism that leverages successful plans as few-shot exemplars, enabling the agent to plan and refine with fewer task demonstrations. Our experiments in the ALFWorld and MiniWoB++ environments demonstrate that AdaPlanner outperforms state-of-the-art baselines by 3.73% and 4.11% while utilizing 2x and 600x fewer samples, respectively. The implementation of AdaPlanner is available at https://github.com/haotiansun14/AdaPlanner.
AdaPlanner: Adaptive Planning from Feedback with Language Models
[ "Haotian Sun", "Yuchen Zhuang", "Lingkai Kong", "Bo Dai", "Chao Zhang" ]
Conference
poster
2305.16653
[ "https://github.com/haotiansun14/adaplanner" ]
https://huggingface.co/papers/2305.16653
4
0
0
5
1
[]
[]
[]
null
https://openreview.net/forum?id=rmQgQCZWiP
@inproceedings{ zhang2023managing, title={Managing Temporal Resolution in Continuous Value Estimation: A Fundamental Trade-off}, author={Zichen Zhang and Johannes Kirschner and Junxi Zhang and Francesco Zanini and Alex Ayoub and Masood Dehghan and Dale Schuurmans}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=rmQgQCZWiP} }
A default assumption in reinforcement learning (RL) and optimal control is that observations arrive at discrete time points on a fixed clock cycle. Yet, many applications involve continuous-time systems where the time discretization, in principle, can be managed. The impact of time discretization on RL methods has not been fully characterized in existing theory, but a more detailed analysis of its effect could reveal opportunities for improving data-efficiency. We address this gap by analyzing Monte-Carlo policy evaluation for LQR systems and uncover a fundamental trade-off between approximation and statistical error in value estimation. Importantly, these two errors behave differently to time discretization, leading to an optimal choice of temporal resolution for a given data budget. These findings show that managing the temporal resolution can provably improve policy evaluation efficiency in LQR systems with finite data. Empirically, we demonstrate the trade-off in numerical simulations of LQR instances and standard RL benchmarks for non-linear continuous control.
Managing Temporal Resolution in Continuous Value Estimation: A Fundamental Trade-off
[ "Zichen Zhang", "Johannes Kirschner", "Junxi Zhang", "Francesco Zanini", "Alex Ayoub", "Masood Dehghan", "Dale Schuurmans" ]
Conference
poster
2212.08949
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=rlPUJ60bwM
@inproceedings{ blain2023false, title={False Discovery Proportion control for aggregated Knockoffs}, author={Alexandre Blain and Bertrand Thirion and Olivier Grisel and Pierre Neuvial}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=rlPUJ60bwM} }
Controlled variable selection is an important analytical step in various scientific fields, such as brain imaging or genomics. In these high-dimensional data settings, considering too many variables leads to poor models and high costs, hence the need for statistical guarantees on false positives. Knockoffs are a popular statistical tool for conditional variable selection in high dimension. However, they control for the expected proportion of false discoveries (FDR) and not the actual proportion of false discoveries (FDP). We present a new method, KOPI, that controls the proportion of false discoveries for Knockoff-based inference. The proposed method also relies on a new type of aggregation to address the undesirable randomness associated with classical Knockoff inference. We demonstrate FDP control and substantial power gains over existing Knockoff-based methods in various simulation settings and achieve good sensitivity/specificity tradeoffs on brain imaging data.
False Discovery Proportion control for aggregated Knockoffs
[ "Alexandre Blain", "Bertrand Thirion", "Olivier Grisel", "Pierre Neuvial" ]
Conference
poster
2310.10373
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=rih3hsSWx8
@inproceedings{ wang2023transformed, title={Transformed Low-Rank Parameterization Can Help Robust Generalization for Tensor Neural Networks}, author={Andong Wang and Chao Li and Mingyuan Bai and Zhong Jin and Guoxu Zhou and Qibin Zhao}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=rih3hsSWx8} }
Multi-channel learning has gained significant attention in recent applications, where neural networks with t-product layers (t-NNs) have shown promising performance through novel feature mapping in the transformed domain. However, despite the practical success of t-NNs, the theoretical analysis of their generalization remains unexplored. We address this gap by deriving upper bounds on the generalization error of t-NNs in both standard and adversarial settings. Notably, it reveals that t-NNs compressed with exact transformed low-rank parameterization can achieve tighter adversarial generalization bounds compared to non-compressed models. While exact transformed low-rank weights are rare in practice, the analysis demonstrates that through adversarial training with gradient flow, highly over-parameterized t-NNs with the ReLU activation can be implicitly regularized towards a transformed low-rank parameterization under certain conditions. Moreover, this paper establishes sharp adversarial generalization bounds for t-NNs with approximately transformed low-rank weights. Our analysis highlights the potential of transformed low-rank parameterization in enhancing the robust generalization of t-NNs, offering valuable insights for further research and development.
Transformed Low-Rank Parameterization Can Help Robust Generalization for Tensor Neural Networks
[ "Andong Wang", "Chao Li", "Mingyuan Bai", "Zhong Jin", "Guoxu Zhou", "Qibin Zhao" ]
Conference
poster
2303.00196
[ "https://github.com/pingzaiwang/Analysis4TNN" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=rheCTpRrxI
@inproceedings{ kolotouros2023dreamhuman, title={DreamHuman: Animatable 3D Avatars from Text}, author={Nikos Kolotouros and Thiemo Alldieck and Andrei Zanfir and Eduard Gabriel Bazavan and Mihai Fieraru and Cristian Sminchisescu}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=rheCTpRrxI} }
We present \emph{DreamHuman}, a method to generate realistic animatable 3D human avatar models entirely from textual descriptions. Recent text-to-3D methods have made considerable strides in generation, but are still lacking in important aspects. Control and often spatial resolution remain limited, existing methods produce fixed 3D human models rather than ones that can be placed in different poses (i.e., re-posable or animatable models), and anthropometric consistency for complex structures like people remains a challenge. \emph{DreamHuman} connects large text-to-image synthesis models, neural radiance fields, and statistical human body models in a novel optimization framework. This makes it possible to generate dynamic 3D human avatars with high-quality textures and learnt per-instance rigid and non-rigid geometric deformations. We demonstrate that our method is capable of generating a wide variety of animatable, realistic 3D human models from text. These have diverse appearance, clothing, skin tones and body shapes, and outperform both generic text-to-3D approaches and previous text-based 3D avatar generators in visual fidelity.
DreamHuman: Animatable 3D Avatars from Text
[ "Nikos Kolotouros", "Thiemo Alldieck", "Andrei Zanfir", "Eduard Gabriel Bazavan", "Mihai Fieraru", "Cristian Sminchisescu" ]
Conference
spotlight
2306.09329
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=rhIfzCZoXG
@inproceedings{ saveski2023counterfactual, title={Counterfactual Evaluation of Peer-Review Assignment Policies}, author={Martin Saveski and Steven Jecmen and Nihar B Shah and Johan Ugander}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=rhIfzCZoXG} }
Peer review assignment algorithms aim to match research papers to suitable expert reviewers, working to maximize the quality of the resulting reviews. A key challenge in designing effective assignment policies is evaluating how changes to the assignment algorithm map to changes in review quality. In this work, we leverage recently proposed policies that introduce randomness in peer-review assignment—in order to mitigate fraud—as a valuable opportunity to evaluate counterfactual assignment policies. Specifically, we exploit how such randomized assignments provide a positive probability of observing the reviews of many assignment policies of interest. To address challenges in applying standard off-policy evaluation methods, such as violations of positivity, we introduce novel methods for partial identification based on monotonicity and Lipschitz smoothness assumptions for the mapping between reviewer-paper covariates and outcomes. We apply our methods to peer-review data from two computer science venues: the TPDP'21 workshop (95 papers and 35 reviewers) and the AAAI'22 conference (8,450 papers and 3,145 reviewers). We consider estimates of (i) the effect on review quality when changing weights in the assignment algorithm, e.g., weighting reviewers' bids vs. textual similarity (between the review's past papers and the submission), and (ii) the "cost of randomization", capturing the difference in expected quality between the perturbed and unperturbed optimal match. We find that placing higher weight on text similarity results in higher review quality and that introducing randomization in the reviewer-paper assignment only marginally reduces the review quality. Our methods for partial identification may be of independent interest, while our off-policy approach can likely find use in evaluating a broad class of algorithmic matching systems.
Counterfactual Evaluation of Peer-Review Assignment Policies
[ "Martin Saveski", "Steven Jecmen", "Nihar B Shah", "Johan Ugander" ]
Conference
spotlight
2305.17339
[ "https://github.com/msaveski/counterfactual-peer-review" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=rfcak9EV99
@inproceedings{ zhao2023policy, title={Policy Optimization for Continuous Reinforcement Learning}, author={Hanyang Zhao and Wenpin Tang and David Yao}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=rfcak9EV99} }
We study reinforcement learning (RL) in the setting of continuous time and space, for an infinite horizon with a discounted objective and the underlying dynamics driven by a stochastic differential equation. Built upon recent advances in the continuous approach to RL, we develop a notion of occupation time (specifically for a discounted objective), and show how it can be effectively used to derive performance difference and local approximation formulas. We further extend these results to illustrate their applications in the PG (policy gradient) and TRPO/PPO (trust region policy optimization/ proximal policy optimization) methods, which have been familiar and powerful tools in the discrete RL setting but under-developed in continuous RL. Through numerical experiments, we demonstrate the effectiveness and advantages of our approach.
Policy Optimization for Continuous Reinforcement Learning
[ "Hanyang Zhao", "Wenpin Tang", "David Yao" ]
Conference
poster
2305.18901
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=rfTFJvTkr2
@inproceedings{ fang2023parallel, title={Parallel Spiking Neurons with High Efficiency and Ability to Learn Long-term Dependencies}, author={Wei Fang and Zhaofei Yu and Zhaokun Zhou and Ding Chen and Yanqi Chen and Zhengyu Ma and Timoth{\'e}e Masquelier and Yonghong Tian}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=rfTFJvTkr2} }
Vanilla spiking neurons in Spiking Neural Networks (SNNs) use charge-fire-reset neuronal dynamics, which can only be simulated serially and can hardly learn long-time dependencies. We find that when the reset is removed, the neuronal dynamics can be reformulated in a non-iterative form and parallelized. By rewriting neuronal dynamics without reset to a general formulation, we propose the Parallel Spiking Neuron (PSN), which generates hidden states that are independent of their predecessors, resulting in parallelizable neuronal dynamics and extremely high simulation speed. The weights of inputs in the PSN are fully connected, which maximizes the utilization of temporal information. To avoid the use of future inputs for step-by-step inference, the weights of the PSN can be masked, resulting in the masked PSN. By sharing weights across time-steps based on the masked PSN, the sliding PSN is proposed to handle sequences of varying lengths. We evaluate the PSN family on simulation speed and temporal/static data classification, and the results show the overwhelming advantage of the PSN family in efficiency and accuracy. To the best of our knowledge, this is the first study on parallelizing spiking neurons, and it can serve as a cornerstone for spiking deep learning research. Our codes are available at https://github.com/fangwei123456/Parallel-Spiking-Neuron.
Parallel Spiking Neurons with High Efficiency and Ability to Learn Long-term Dependencies
[ "Wei Fang", "Zhaofei Yu", "Zhaokun Zhou", "Ding Chen", "Yanqi Chen", "Zhengyu Ma", "Timothée Masquelier", "Yonghong Tian" ]
Conference
poster
2304.12760
[ "https://github.com/fangwei123456/parallel-spiking-neuron" ]
-1
-1
-1
-1
0
[]
[]
[]
null
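The key observation in the Parallel Spiking Neuron record above is that, once the reset is removed, the membrane dynamics become a linear recurrence whose states can be computed for all time steps at once. The NumPy sketch below illustrates that equivalence for a single leaky integrator with an assumed decay factor; spikes would then be obtained by thresholding the potentials elementwise. It is only an illustration of the parallelization idea, not the authors' implementation (which is in the linked repository).

```python
import numpy as np

def serial_no_reset(x, lam=0.5):
    # Step-by-step leaky integration without reset: h[t] = lam * h[t-1] + x[t]
    h, out = 0.0, []
    for xt in x:
        h = lam * h + xt
        out.append(h)
    return np.array(out)

def parallel_no_reset(x, lam=0.5):
    # Closed form h[t] = sum_{i <= t} lam**(t - i) * x[i]: a lower-triangular
    # matrix-vector product, so every time step is computed simultaneously.
    t = np.arange(len(x))
    W = np.tril(lam ** (t[:, None] - t[None, :]))
    return W @ x

x = np.random.default_rng(0).random(8)
assert np.allclose(serial_no_reset(x), parallel_no_reset(x))
spikes = (parallel_no_reset(x) >= 1.0).astype(float)  # elementwise thresholding
print(spikes)
```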
https://openreview.net/forum?id=rcXXNFVlEn
@inproceedings{ prystawski2023why, title={Why think step by step? Reasoning emerges from the locality of experience}, author={Ben Prystawski and Michael Y. Li and Noah Goodman}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=rcXXNFVlEn} }
Humans have a powerful and mysterious capacity to reason. Working through a set of mental steps enables us to make inferences we would not be capable of making directly even though we get no additional data from the world. Similarly, when large language models generate intermediate steps (a chain of thought) before answering a question, they often produce better answers than they would directly. We investigate why and how chain-of-thought reasoning is useful in language models, testing the hypothesis that reasoning is effective when training data consists of overlapping local clusters of variables that influence each other strongly. These training conditions enable the chaining of accurate local inferences to estimate relationships between variables that were not seen together in training. We prove that there will exist a "reasoning gap", where reasoning through intermediate variables reduces bias, for the simple case of an autoregressive density estimator trained on local samples from a chain-structured probabilistic model. We then test our hypothesis experimentally in more complex models, training an autoregressive language model on samples from Bayes nets but only including a subset of variables in each sample. We test language models’ ability to match conditional probabilities with and without intermediate reasoning steps, finding that intermediate steps are only helpful when the training data is locally structured with respect to dependencies between variables. The combination of locally structured observations and reasoning is much more data-efficient than training on all variables. Our results illustrate how the effectiveness of reasoning step by step is rooted in the local statistical structure of the training data.
Why think step by step? Reasoning emerges from the locality of experience
[ "Ben Prystawski", "Michael Y. Li", "Noah Goodman" ]
Conference
oral
2304.03843
[ "https://github.com/benpry/why-think-step-by-step" ]
-1
-1
-1
-1
0
[]
[]
[]
null
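The "reasoning gap" discussed in the record above can be illustrated with a tiny worked example on a chain-structured model A -> B -> C: a learner that only ever sees the local pairs (A, B) and (B, C) can still recover P(C | A) by chaining the two local conditionals through the intermediate variable. The probabilities below are made-up values for illustration; the paper's experiments use Bayes nets and autoregressive language models rather than this hand-computed case.

```python
# Chain-structured model A -> B -> C with binary variables.
# A learner exposed only to local pairs (A, B) and (B, C) never observes (A, C)
# together, yet chaining the local conditionals recovers P(C=1 | A=1):
#   P(C=1 | A=1) = sum_b P(C=1 | B=b) P(B=b | A=1)
p_b_given_a = {1: 0.9, 0: 0.2}   # P(B=1 | A=a), assumed values
p_c_given_b = {1: 0.8, 0: 0.1}   # P(C=1 | B=b), assumed values

a = 1
p_c1_given_a1 = sum(
    p_c_given_b[b] * (p_b_given_a[a] if b == 1 else 1 - p_b_given_a[a])
    for b in (0, 1)
)
print(p_c1_given_a1)  # 0.8*0.9 + 0.1*0.1 = 0.73
```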
https://openreview.net/forum?id=rbw9xCU6Ci
@inproceedings{ bao2023adaptive, title={Adaptive Test-Time Personalization for Federated Learning}, author={Wenxuan Bao and Tianxin Wei and Haohan Wang and Jingrui He}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=rbw9xCU6Ci} }
Personalized federated learning algorithms have shown promising results in adapting models to various distribution shifts. However, most of these methods require labeled data on testing clients for personalization, which is usually unavailable in real-world scenarios. In this paper, we introduce a novel setting called test-time personalized federated learning (TTPFL), where clients locally adapt a global model in an unsupervised way without relying on any labeled data during test-time. While traditional test-time adaptation (TTA) methods can be used in this scenario, most of them inherently assume that training data come from a single domain, whereas in FL they come from multiple clients (source domains) with different distributions. Overlooking these domain interrelationships can result in suboptimal generalization. Moreover, most TTA algorithms are designed for a specific kind of distribution shift and lack the flexibility to handle multiple kinds of distribution shifts in FL. In this paper, we find that this lack of flexibility partially results from their pre-defining which modules to adapt in the model. To tackle this challenge, we propose a novel algorithm called ATP that adaptively learns the adaptation rates for each module in the model from distribution shifts among source domains. Theoretical analysis proves the strong generalization of ATP. Extensive experiments demonstrate its superiority in handling various distribution shifts including label shift, image corruptions, and domain shift, outperforming existing TTA methods across multiple datasets and model architectures. Our code is available at https://github.com/baowenxuan/ATP.
Adaptive Test-Time Personalization for Federated Learning
[ "Wenxuan Bao", "Tianxin Wei", "Haohan Wang", "Jingrui He" ]
Conference
poster
2310.18816
[ "https://github.com/baowenxuan/atp" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=rZqRu8e4uc
@inproceedings{ freymuth2023swarm, title={Swarm Reinforcement Learning for Adaptive Mesh Refinement}, author={Niklas Freymuth and Philipp Dahlinger and Tobias Daniel W{\"u}rth and Simon Reisch and Luise K{\"a}rger and Gerhard Neumann}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=rZqRu8e4uc} }
The Finite Element Method, an important technique in engineering, is aided by Adaptive Mesh Refinement (AMR), which dynamically refines mesh regions to allow for a favorable trade-off between computational speed and simulation accuracy. Classical methods for AMR depend on task-specific heuristics or expensive error estimators, hindering their use for complex simulations. Recent learned AMR methods tackle these problems, but so far scale only to simple toy examples. We formulate AMR as a novel Adaptive Swarm Markov Decision Process in which a mesh is modeled as a system of simple collaborating agents that may split into multiple new agents. This framework allows for a spatial reward formulation that simplifies the credit assignment problem, which we combine with Message Passing Networks to propagate information between neighboring mesh elements. We experimentally validate the effectiveness of our approach, Adaptive Swarm Mesh Refinement (ASMR), showing that it learns reliable, scalable, and efficient refinement strategies on a set of challenging problems. Our approach significantly speeds up computation, achieving up to 30-fold improvement compared to uniform refinements in complex simulations. Additionally, we outperform learned baselines and achieve a refinement quality that is on par with a traditional error-based AMR strategy without expensive oracle information about the error signal.
Swarm Reinforcement Learning for Adaptive Mesh Refinement
[ "Niklas Freymuth", "Philipp Dahlinger", "Tobias Daniel Würth", "Simon Reisch", "Luise Kärger", "Gerhard Neumann" ]
Conference
poster
2304.00818
[ "https://github.com/niklasfreymuth/asmr" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=rY4sA9qYKy
@inproceedings{ joudaki2023on, title={On the impact of activation and normalization in obtaining isometric embeddings at initialization}, author={Amir Joudaki and Hadi Daneshmand and Francis Bach}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=rY4sA9qYKy} }
In this paper, we explore the structure of the penultimate Gram matrix in deep neural networks, which contains the pairwise inner products of outputs corresponding to a batch of inputs. In several architectures it has been observed that this Gram matrix becomes degenerate with depth at initialization, which dramatically slows training. Normalization layers, such as batch or layer normalization, play a pivotal role in preventing the rank collapse issue. Despite promising advances, the existing theoretical results do not extend to layer normalization, which is widely used in transformers, and can not quantitatively characterize the role of non-linear activations. To bridge this gap, we prove that layer normalization, in conjunction with activation layers, biases the Gram matrix of a multilayer perceptron towards the identity matrix at an exponential rate with depth at initialization. We quantify this rate using the Hermite expansion of the activation function.
On the impact of activation and normalization in obtaining isometric embeddings at initialization
[ "Amir Joudaki", "Hadi Daneshmand", "Francis Bach" ]
Conference
poster
[ "https://github.com/ajoudaki/deepnet-isometry" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=rW4mNcDxpS
@inproceedings{ ziesche2023wasserstein, title={Wasserstein Gradient Flows for Optimizing Gaussian Mixture Policies}, author={Hanna Ziesche and Leonel Rozo}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=rW4mNcDxpS} }
Robots often rely on a repertoire of previously-learned motion policies for performing tasks of diverse complexities. When facing unseen task conditions or when new task requirements arise, robots must adapt their motion policies accordingly. In this context, policy optimization is the \emph{de facto} paradigm to adapt robot policies as a function of task-specific objectives. Most commonly-used motion policies carry particular structures that are often overlooked in policy optimization algorithms. We instead propose to leverage the structure of probabilistic policies by casting the policy optimization as an optimal transport problem. Specifically, we focus on robot motion policies that build on Gaussian mixture models (GMMs) and formulate the policy optimization as a Wasserstein gradient flow over the space of GMMs. This naturally allows us to constrain the policy updates via the $L^2$-Wasserstein distance between GMMs to enhance the stability of the policy optimization process. Furthermore, we leverage the geometry of the Bures-Wasserstein manifold to optimize the Gaussian distributions of the GMM policy via Riemannian optimization. We evaluate our approach on common robotic settings: reaching motions, collision-avoidance behaviors, and multi-goal tasks. Our results show that our method outperforms common policy optimization baselines in terms of task success rate while yielding low-variance solutions.
Wasserstein Gradient Flows for Optimizing Gaussian Mixture Policies
[ "Hanna Ziesche", "Leonel Rozo" ]
Conference
poster
2305.10411
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=rUldfB4SPT
@inproceedings{ yang2023pred, title={{PRED}: Pre-training via Semantic Rendering on Li{DAR} Point Clouds}, author={Hao Yang and Haiyang Wang and Di Dai and Liwei Wang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=rUldfB4SPT} }
Pre-training is crucial in 3D-related fields such as autonomous driving where point cloud annotation is costly and challenging. Many recent studies on point cloud pre-training, however, have overlooked the issue of incompleteness, where only a fraction of the points are captured by LiDAR, leading to ambiguity during the training phase. On the other hand, images offer more comprehensive information and richer semantics that can bolster point cloud encoders in addressing the incompleteness issue inherent in point clouds. Yet, incorporating images into point cloud pre-training presents its own challenges due to occlusions, potentially causing misalignments between points and pixels. In this work, we propose PRED, a novel image-assisted pre-training framework for outdoor point clouds in an occlusion-aware manner. The main ingredient of our framework is a Birds-Eye-View (BEV) feature map conditioned semantic rendering, leveraging the semantics of images for supervision through neural rendering. We further enhance our model's performance by incorporating point-wise masking with a high mask ratio (95%). Extensive experiments demonstrate PRED's superiority over prior point cloud pre-training methods, providing significant improvements on various large-scale datasets for 3D perception tasks. Codes will be available at https://github.com/PRED4pc/PRED.
PRED: Pre-training via Semantic Rendering on LiDAR Point Clouds
[ "Hao Yang", "Haiyang Wang", "Di Dai", "Liwei Wang" ]
Conference
poster
2311.04501
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=rUf0GV5CuU
@inproceedings{ roy2023locality, title={Locality Sensitive Hashing in Fourier Frequency Domain For Soft Set Containment Search}, author={Indradyumna Roy and Rishi Agarwal and Soumen Chakrabarti and Anirban Dasgupta and Abir De}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=rUf0GV5CuU} }
In many search applications related to passage retrieval, text entailment, and subgraph search, the query and each 'document' is a set of elements, with a document being relevant if it contains the query. These elements are not represented by atomic IDs, but by embedded representations, thereby extending set containment to *soft* set containment. Recent applications address soft set containment by encoding sets into fixed-size vectors and checking for elementwise *vector* *dominance*. This 0/1 property can be relaxed to an asymmetric *hinge* *distance* for scoring and ranking candidate documents. Here we focus on data-sensitive, trainable indices for fast retrieval of relevant documents. Existing LSH methods are designed for mostly symmetric or few simple asymmetric distance functions, which are not suitable for hinge distance. Instead, we transform hinge distance into a proposed *dominance* *similarity* measure, to which we then apply a Fourier transform, thereby expressing dominance similarity as an expectation of inner products of functions in the frequency domain. Next, we approximate the expectation with an importance-sampled estimate. The overall consequence is that now we can use a traditional LSH, but in the frequency domain. To ensure that the LSH uses hash bits efficiently, we learn hash functions that are sensitive to both corpus and query distributions, mapped to the frequency domain. Our experiments show that the proposed asymmetric dominance similarity is critical to the targeted applications, and that our LSH, which we call FourierHashNet, provides a better query time vs. retrieval quality trade-off, compared to several baselines. Both the Fourier transform and the trainable hash codes contribute to performance gains.
Locality Sensitive Hashing in Fourier Frequency Domain For Soft Set Containment Search
[ "Indradyumna Roy", "Rishi Agarwal", "Soumen Chakrabarti", "Anirban Dasgupta", "Abir De" ]
Conference
spotlight
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=rUFckPrzXR
@inproceedings{ ko2023homotopybased, title={Homotopy-based training of Neural{ODE}s for accurate dynamics discovery}, author={Joon-Hyuk Ko and Hankyul Koh and Nojun Park and Wonho Jhe}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=rUFckPrzXR} }
Neural Ordinary Differential Equations (NeuralODEs) present an attractive way to extract dynamical laws from time series data, as they bridge neural networks with the differential equation-based modeling paradigm of the physical sciences. However, these models often display long training times and suboptimal results, especially for longer duration data. While a common strategy in the literature imposes strong constraints to the NeuralODE architecture to inherently promote stable model dynamics, such methods are ill-suited for dynamics discovery as the unknown governing equation is not guaranteed to satisfy the assumed constraints. In this paper, we develop a new training method for NeuralODEs, based on synchronization and homotopy optimization, that does not require changes to the model architecture. We show that synchronizing the model dynamics and the training data tames the originally irregular loss landscape, which homotopy optimization can then leverage to enhance training. Through benchmark experiments, we demonstrate our method achieves competitive or better training loss while often requiring less than half the number of training epochs compared to other model-agnostic techniques. Furthermore, models trained with our method display better extrapolation capabilities, highlighting the effectiveness of our method.
Homotopy-based training of NeuralODEs for accurate dynamics discovery
[ "Joon-Hyuk Ko", "Hankyul Koh", "Nojun Park", "Wonho Jhe" ]
Conference
poster
2210.01407
[ "https://github.com/jhko725/neuralodehomotopy" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=rQI3FOzo1f
@inproceedings{ shin2023efficient, title={Efficient Learning of Linear Graph Neural Networks via Node Subsampling}, author={Seiyun Shin and Ilan Shomorony and Han Zhao}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=rQI3FOzo1f} }
Graph Neural Networks (GNNs) are a powerful class of machine learning models with applications in recommender systems, drug discovery, social network analysis, and computer vision. One challenge with their implementation is that GNNs often take large-scale graphs as inputs, which imposes significant computational/storage costs in the training and testing phases. In particular, the message passing operations of a GNN require multiplication of the graph adjacency matrix $A \in \mathbb{R}^{n \times n}$ and the data matrix $X \in \mathbb{R}^{n \times d}$, and the $O(n^2 d)$ time complexity can be prohibitive for large $n$. Thus, a natural question is whether it is possible to perform the GNN operations in (quasi-)linear time by avoiding the full computation of $A X$. To study this question, we consider the setting of a regression task on a two-layer Linear Graph Convolutional Network (GCN). We develop an efficient training algorithm based on (1) performing node subsampling, (2) estimating the leverage scores of $A X$ based on the subsampled graph, and (3) performing leverage score sampling on $A X$. We show that our proposed scheme learns the regression model observing only $O(nd\epsilon^{-2}\log n)$ entries of $A$ in time $O(nd^2 \epsilon^{-2}\log n)$, with the guarantee that the learned weights deviate by at most $\epsilon$ under the $\ell_2$ norm from the model learned using the entire adjacency matrix $A$. We present empirical results for regression problems on real-world graphs and show that our algorithm significantly outperforms other baseline sampling strategies that exploit the same number of observations.
Efficient Learning of Linear Graph Neural Networks via Node Subsampling
[ "Seiyun Shin", "Ilan Shomorony", "Han Zhao" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
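Step (3) in the record above is standard leverage-score row sampling applied to the matrix AX. The sketch below shows that ingredient in isolation on a toy regression problem: it forms AX exactly and computes exact leverage scores for clarity, whereas the paper's point is precisely to estimate those scores from a node-subsampled graph without ever forming AX in full. The graph, targets, and sample sizes are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 5
A = (rng.random((n, n)) < 0.01).astype(float)   # toy adjacency matrix
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)

B = A @ X                                        # formed exactly here only for clarity
y = B @ w_true + 0.1 * rng.standard_normal(n)

Q, _ = np.linalg.qr(B)
lev = (Q ** 2).sum(axis=1)                       # exact leverage scores of the rows of AX
p = lev / lev.sum()

m = 200                                          # number of sampled rows
idx = rng.choice(n, size=m, replace=True, p=p)
scale = 1.0 / np.sqrt(m * p[idx])                # importance-sampling rescaling
w_full = np.linalg.lstsq(B, y, rcond=None)[0]
w_samp = np.linalg.lstsq(B[idx] * scale[:, None], y[idx] * scale, rcond=None)[0]

print(np.linalg.norm(w_samp - w_full))           # small: the subsampled solve is close
```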
https://openreview.net/forum?id=rN99gLCBe4
@inproceedings{ suh2023continuoustime, title={Continuous-time Analysis of Anchor Acceleration}, author={Jaewook J. Suh and Jisun Park and Ernest K. Ryu}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=rN99gLCBe4} }
Recently, the anchor acceleration, an acceleration mechanism distinct from Nesterov's, has been discovered for minimax optimization and fixed-point problems, but its mechanism is not understood well, much less so than Nesterov acceleration. In this work, we analyze continuous-time models of anchor acceleration. We provide tight, unified analyses for characterizing the convergence rate as a function of the anchor coefficient $\beta(t)$, thereby providing insight into the anchor acceleration mechanism and its accelerated $\mathcal{O}(1/k^2)$-convergence rate. Finally, we present an adaptive method inspired by the continuous-time analyses and establish its effectiveness through theoretical analyses and experiments.
Continuous-time Analysis of Anchor Acceleration
[ "Jaewook J. Suh", "Jisun Park", "Ernest K. Ryu" ]
Conference
poster
2304.00771
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
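The anchor acceleration analyzed in the record above refers to Halpern-style fixed-point iterations in which each step is pulled back toward the initial point with a coefficient that decays over time. The sketch below runs the discrete anchor iteration with the common choice beta_k = 1/(k+2) on a toy nonexpansive map; the paper itself studies continuous-time models and an adaptive coefficient, which this sketch does not reproduce.

```python
import numpy as np

def picard(T, x0, iters=200):
    x = x0.copy()
    for _ in range(iters):
        x = T(x)                       # plain fixed-point iteration
    return x

def anchored(T, x0, iters=200):
    # Anchor (Halpern) iteration: x_{k+1} = beta_k * x0 + (1 - beta_k) * T(x_k),
    # with anchor coefficient beta_k = 1 / (k + 2).
    x = x0.copy()
    for k in range(iters):
        beta = 1.0 / (k + 2)
        x = beta * x0 + (1.0 - beta) * T(x)
    return x

A = np.array([[0.0, 1.0], [-1.0, 0.0]])    # 90-degree rotation: nonexpansive, fixed point 0
T = lambda x: A @ x
x0 = np.array([1.0, 1.0])

print(picard(T, x0))     # keeps rotating, never settles
print(anchored(T, x0))   # slowly shrinks toward the fixed point at the origin
```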
https://openreview.net/forum?id=rLpLjCBW4J
@inproceedings{ jia2023preconditioning, title={Preconditioning Matters: Fast Global Convergence of Non-convex Matrix Factorization via Scaled Gradient Descent}, author={Xixi Jia and Hailin Wang and Jiangjun Peng and Xiangchu Feng and Deyu Meng}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=rLpLjCBW4J} }
Low-rank matrix factorization (LRMF) is a canonical problem in non-convex optimization: the objective function to be minimized is non-convex and even non-smooth, which makes global convergence guarantees for gradient-based algorithms quite challenging. Recent work made a breakthrough by proving that standard gradient descent converges to an $\varepsilon$-global minimum after $O( \frac{d \kappa^2}{\tau^2} \ln \frac{d \sigma_d}{\tau} + \frac{d \kappa^2}{\tau^2} \ln \frac{\sigma_d}{\varepsilon})$ iterations from small initialization with a very small learning rate (both are related to the small constant $\tau$). However, the dependence of the convergence on the \textit{condition number} $\kappa$ and on a \textit{small learning rate} makes it impractical, especially for ill-conditioned LRMF problems. In this paper, we show that preconditioning helps accelerate the convergence and prove that the scaled gradient descent (ScaledGD) and its variant, alternating scaled gradient descent (AltScaledGD), converge to an $\varepsilon$-global minimum after $O( \ln \frac{d}{\delta} + \ln \frac{d}{\varepsilon})$ iterations from general random initialization. Meanwhile, for small initialization as in gradient descent, both ScaledGD and AltScaledGD converge to an $\varepsilon$-global minimum after only $O(\ln \frac{d}{\varepsilon})$ iterations. Furthermore, we prove that, being closer in spirit to alternating minimization, AltScaledGD converges faster than ScaledGD, and its global convergence does not rely on a small learning rate or small initialization, which certifies the advantages of AltScaledGD in LRMF.
Preconditioning Matters: Fast Global Convergence of Non-convex Matrix Factorization via Scaled Gradient Descent
[ "Xixi Jia", "Hailin Wang", "Jiangjun Peng", "Xiangchu Feng", "Deyu Meng" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
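ScaledGD, the preconditioned method analyzed in the record above, left-multiplies each factor's gradient by the inverse Gram matrix of the other factor. The NumPy sketch below runs it on a synthetic low-rank factorization; the step size, initialization scale, and iteration count are arbitrary choices, and AltScaledGD would simply update the two factors sequentially rather than simultaneously.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, r = 60, 40, 3
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, d))   # rank-r target

L = 0.1 * rng.standard_normal((n, r))     # small random initialization
R = 0.1 * rng.standard_normal((d, r))
eta = 0.5                                  # assumed step size

for _ in range(500):
    E = L @ R.T - M
    # Precondition each gradient by the inverse Gram matrix of the other factor.
    gL = E @ R @ np.linalg.inv(R.T @ R + 1e-8 * np.eye(r))
    gR = E.T @ L @ np.linalg.inv(L.T @ L + 1e-8 * np.eye(r))
    L, R = L - eta * gL, R - eta * gR      # simultaneous update; AltScaledGD alternates

print(np.linalg.norm(L @ R.T - M) / np.linalg.norm(M))  # relative error, should shrink toward 0
```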
https://openreview.net/forum?id=rJc5Lsn5QU
@inproceedings{ yao2023articd, title={{ARTIC}3D: Learning Robust Articulated 3D Shapes from Noisy Web Image Collections}, author={Chun-Han Yao and Amit Raj and Wei-Chih Hung and Michael Rubinstein and Yuanzhen Li and Ming-Hsuan Yang and Varun Jampani}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=rJc5Lsn5QU} }
Estimating 3D articulated shapes like animal bodies from monocular images is inherently challenging due to the ambiguities of camera viewpoint, pose, texture, lighting, etc. We propose ARTIC3D, a self-supervised framework to reconstruct per-instance 3D shapes from a sparse image collection in-the-wild. Specifically, ARTIC3D is built upon a skeleton-based surface representation and is further guided by 2D diffusion priors from Stable Diffusion. First, we enhance the input images with occlusions/truncation via 2D diffusion to obtain cleaner mask estimates and semantic features. Second, we perform diffusion-guided 3D optimization to estimate shape and texture that are of high-fidelity and faithful to input images. We also propose a novel technique to calculate more stable image-level gradients via diffusion models compared to existing alternatives. Finally, we produce realistic animations by fine-tuning the rendered shape and texture under rigid part transformations. Extensive evaluations on multiple existing datasets as well as newly introduced noisy web image collections with occlusions and truncation demonstrate that ARTIC3D outputs are more robust to noisy images, higher quality in terms of shape and texture details, and more realistic when animated.
ARTIC3D: Learning Robust Articulated 3D Shapes from Noisy Web Image Collections
[ "Chun-Han Yao", "Amit Raj", "Wei-Chih Hung", "Michael Rubinstein", "Yuanzhen Li", "Ming-Hsuan Yang", "Varun Jampani" ]
Conference
poster
2306.04619
[ "" ]
https://huggingface.co/papers/2306.04619
3
4
0
7
1
[]
[]
[]
null
https://openreview.net/forum?id=rHAX0LRwk8
@inproceedings{ chen2023adversarial, title={Adversarial Counterfactual Environment Model Learning}, author={Xiong-Hui Chen and Yang Yu and Zhengmao Zhu and ZhiHua Yu and Chen Zhenjun and Chenghe Wang and Yinan Wu and Rong-Jun Qin and Hongqiu Wu and Ruijin Ding and Huang Fangsheng}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=rHAX0LRwk8} }
An accurate environment dynamics model is crucial for various downstream tasks in sequential decision-making, such as counterfactual prediction, off-policy evaluation, and offline reinforcement learning. Currently, these models are typically learned through empirical risk minimization (ERM) by step-wise fitting of historical transition data. This approach was previously believed to be unreliable over long-horizon rollouts because of compounding errors, which can lead to uncontrollable inaccuracies in predictions. In this paper, we find that the challenge extends beyond just long-term prediction errors: we reveal that even when planning with one step, learned dynamics models can also perform poorly due to the selection bias of behavior policies during data collection. This issue can significantly mislead the policy optimization process even when identifying single-step optimal actions, leading to even greater risk in sequential decision-making scenarios. To tackle this problem, we introduce a novel model-learning objective called adversarial weighted empirical risk minimization (AWRM). AWRM incorporates an adversarial policy that exploits the model to generate a data distribution that weakens the model's prediction accuracy, and subsequently, the model is learned under this adversarial data distribution. We implement a practical algorithm, GALILEO, for AWRM and evaluate it on two synthetic tasks, three continuous-control tasks, and \textit{a real-world application}. The experiments demonstrate that GALILEO can accurately predict counterfactual actions and improve various downstream tasks, including offline policy evaluation and improvement, as well as online decision-making.
Adversarial Counterfactual Environment Model Learning
[ "Xiong-Hui Chen", "Yang Yu", "Zhengmao Zhu", "ZhiHua Yu", "Chen Zhenjun", "Chenghe Wang", "Yinan Wu", "Rong-Jun Qin", "Hongqiu Wu", "Ruijin Ding", "Huang Fangsheng" ]
Conference
spotlight
2206.04890
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=rGN3X9jnEg
@inproceedings{ bredenberg2023formalizing, title={Formalizing locality for normative synaptic plasticity models}, author={Colin Bredenberg and Ezekiel Williams and Cristina Savin and Blake Aaron Richards and Guillaume Lajoie}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=rGN3X9jnEg} }
In recent years, many researchers have proposed new models for synaptic plasticity in the brain based on principles of machine learning. The central motivation has been the development of learning algorithms that are able to learn difficult tasks while qualifying as "biologically plausible". However, the concept of a biologically plausible learning algorithm is only heuristically defined as an algorithm that is potentially implementable by biological neural networks. Further, claims that neural circuits could implement any given algorithm typically rest on an amorphous concept of "locality" (both in space and time). As a result, it is unclear what many proposed local learning algorithms actually predict biologically, and which of these are consequently good candidates for experimental investigation. Here, we address this lack of clarity by proposing formal and operational definitions of locality. Specifically, we define different classes of locality, each of which makes clear what quantities cannot be included in a learning rule if an algorithm is to qualify as local with respect to a given (biological) constraint. We subsequently use this framework to distill testable predictions from various classes of biologically plausible synaptic plasticity models that are robust to arbitrary choices about neural network architecture. Therefore, our framework can be used to guide claims of biological plausibility and to identify potential means of experimentally falsifying a proposed learning algorithm for the brain.
Formalizing locality for normative synaptic plasticity models
[ "Colin Bredenberg", "Ezekiel Williams", "Cristina Savin", "Blake Aaron Richards", "Guillaume Lajoie" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=rG1M3kOVba
@inproceedings{ wang2023fluid, title={{FL}u{ID}: Mitigating Stragglers in Federated Learning using Invariant Dropout}, author={Irene Wang and Prashant J. Nair and Divya Mahajan}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=rG1M3kOVba} }
Federated Learning (FL) allows machine learning models to train locally on individual mobile devices, synchronizing model updates via a shared server. This approach safeguards user privacy; however, it also generates a heterogeneous training environment due to the varying performance capabilities across devices. As a result, “straggler” devices with lower performance often dictate the overall training time in FL. In this work, we aim to alleviate this performance bottleneck due to stragglers by dynamically balancing the training load across the system. We introduce Invariant Dropout, a method that extracts a sub-model based on the weight update threshold, thereby minimizing potential impacts on accuracy. Building on this dropout technique, we develop an adaptive training framework, Federated Learning using Invariant Dropout (FLuID). FLuID offers a lightweight sub-model extraction to regulate computational intensity, thereby reducing the load on straggler devices without affecting model quality. Our method leverages neuron updates from non-straggler devices to construct a tailored sub-model for each straggler based on client performance profiling. Furthermore, FLuID can dynamically adapt to changes in stragglers as runtime conditions shift. We evaluate FLuID using five real-world mobile clients. The evaluations show that Invariant Dropout maintains baseline model efficiency while alleviating the performance bottleneck of stragglers through a dynamic, runtime approach.
FLuID: Mitigating Stragglers in Federated Learning using Invariant Dropout
[ "Irene Wang", "Prashant J. Nair", "Divya Mahajan" ]
Conference
poster
2307.02623
[ "https://github.com/iwang05/fluid" ]
-1
-1
-1
-1
0
[]
[]
[]
null
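Invariant Dropout, as described in the FLuID record above, extracts a sub-model by dropping neurons whose recent weight updates fall below a threshold, so that straggler devices train a smaller but behaviorally similar model. The sketch below is a minimal, assumption-laden rendering of that selection step for a single dense layer; the actual system additionally profiles client performance and adapts the sub-model size at runtime.

```python
import numpy as np

def invariant_dropout_mask(w_prev, w_curr, keep_ratio=0.5):
    # Score each output neuron by how much its incoming weights changed recently;
    # the least-changing ("invariant") neurons are dropped for straggler devices.
    change = np.abs(w_curr - w_prev).sum(axis=1)
    k = max(1, int(keep_ratio * len(change)))
    keep = np.argsort(change)[-k:]
    mask = np.zeros(len(change), dtype=bool)
    mask[keep] = True
    return mask

rng = np.random.default_rng(0)
w_prev = rng.standard_normal((64, 128))
# Simulate a training round where later neurons changed more than earlier ones.
w_curr = w_prev + rng.standard_normal((64, 128)) * np.linspace(0.0, 0.1, 64)[:, None]

mask = invariant_dropout_mask(w_prev, w_curr, keep_ratio=0.5)
sub_layer = w_curr[mask]        # the straggler trains only this reduced layer
print(sub_layer.shape)          # (32, 128)
```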
https://openreview.net/forum?id=rDiMgZulwi
@inproceedings{ li2023learning, title={Learning better with Dale{\textquoteright}s Law: A Spectral Perspective}, author={Pingsheng Li and Jonathan Cornford and Arna Ghosh and Blake Aaron Richards}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=rDiMgZulwi} }
Most recurrent neural networks (RNNs) do not include a fundamental constraint of real neural circuits: Dale's Law, which implies that neurons must be excitatory (E) or inhibitory (I). Dale's Law is generally absent from RNNs because simply partitioning a standard network's units into E and I populations impairs learning. However, here we extend a recent feedforward bio-inspired EI network architecture, named Dale's ANNs, to recurrent networks, and demonstrate that good performance is possible while respecting Dale's Law. This begs the question: What makes some forms of EI network learn poorly and others learn well? And, why does the simple approach of incorporating Dale's Law impair learning? Historically the answer was thought to be the sign constraints on EI network parameters, and this was a motivation behind Dale's ANNs. However, here we show the spectral properties of the recurrent weight matrix at initialisation are more impactful on network performance than sign constraints. We find that simple EI partitioning results in a singular value distribution that is multimodal and dispersed, whereas standard RNNs have an unimodal, more clustered singular value distribution, as do recurrent Dale's ANNs. We also show that the spectral properties and performance of partitioned EI networks are worse for small networks with fewer I units, and we present normalised SVD entropy as a measure of spectrum pathology that correlates with performance. Overall, this work sheds light on a long-standing mystery in neuroscience-inspired AI and computational neuroscience, paving the way for greater alignment between neural networks and biology.
Learning better with Dale’s Law: A Spectral Perspective
[ "Pingsheng Li", "Jonathan Cornford", "Arna Ghosh", "Blake Aaron Richards" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
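The record above attributes the poor learning of naively partitioned EI networks to the spectrum of the recurrent weight matrix at initialization, summarized by a normalized SVD entropy. The sketch below computes that measure for a standard Gaussian initialization versus a naive sign-constrained EI initialization; the scaling and the 80/20 excitatory/inhibitory split are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np

def normalized_svd_entropy(W):
    # Entropy of the normalized singular-value distribution, divided by log(n)
    # so that 1.0 corresponds to a perfectly flat spectrum.
    s = np.linalg.svd(W, compute_uv=False)
    p = s / s.sum()
    return -(p * np.log(p + 1e-12)).sum() / np.log(len(p))

rng = np.random.default_rng(0)
n, frac_e = 200, 0.8                                # assumed: 80% excitatory units

W_std = rng.standard_normal((n, n)) / np.sqrt(n)    # standard RNN initialization

# Naive EI partition: each unit's outgoing weights (a column) share one sign,
# obtained by simply sign-constraining a standard initialization.
signs = np.where(np.arange(n) < int(frac_e * n), 1.0, -1.0)
W_ei = np.abs(rng.standard_normal((n, n))) / np.sqrt(n) * signs

print("standard :", normalized_svd_entropy(W_std))
print("naive EI :", normalized_svd_entropy(W_ei))   # typically lower: one large outlier singular value
```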
https://openreview.net/forum?id=r9fzp8eyhZ
@inproceedings{ zhuang2023learning, title={Learning Invariant Molecular Representation in Latent Discrete Space}, author={Xiang Zhuang and Qiang Zhang and Keyan Ding and Yatao Bian and Xiao Wang and Jingsong Lv and Hongyang Chen and Huajun Chen}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=r9fzp8eyhZ} }
Molecular representation learning lays the foundation for drug discovery. However, existing methods suffer from poor out-of-distribution (OOD) generalization, particularly when data for training and testing originate from different environments. To address this issue, we propose a new framework for learning molecular representations that exhibit invariance and robustness against distribution shifts. Specifically, we propose a strategy called ``first-encoding-then-separation'' to identify invariant molecule features in the latent space, which deviates from conventional practices. Prior to the separation step, we introduce a residual vector quantization module that mitigates the over-fitting to training data distributions while preserving the expressivity of encoders. Furthermore, we design a task-agnostic self-supervised learning objective to encourage precise invariance identification, which enables our method widely applicable to a variety of tasks, such as regression and multi-label classification. Extensive experiments on 18 real-world molecular datasets demonstrate that our model achieves stronger generalization against state-of-the-art baselines in the presence of various distribution shifts. Our code is available at https://github.com/HICAI-ZJU/iMoLD.
Learning Invariant Molecular Representation in Latent Discrete Space
[ "Xiang Zhuang", "Qiang Zhang", "Keyan Ding", "Yatao Bian", "Xiao Wang", "Jingsong Lv", "Hongyang Chen", "Huajun Chen" ]
Conference
poster
2310.14170
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=r9eZH6WNm2
@inproceedings{ huang2023learning, title={Learning to Group Auxiliary Datasets for Molecule}, author={Tinglin Huang and Ziniu Hu and Zhitao Ying}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=r9eZH6WNm2} }
The limited availability of annotations in small molecule datasets presents a challenge to machine learning models. To address this, one common strategy is to collaborate with additional auxiliary datasets. However, having more data does not always guarantee improvements. Negative transfer can occur when the knowledge in the target dataset differs or contradicts that of the auxiliary molecule datasets. In light of this, identifying the auxiliary molecule datasets that can benefit the target dataset when jointly trained remains a critical and unresolved problem. Through an empirical analysis, we observe that combining graph structure similarity and task similarity can serve as a more reliable indicator for identifying high-affinity auxiliary datasets. Motivated by this insight, we propose MolGroup, which separates the dataset affinity into task and structure affinity to predict the potential benefits of each auxiliary molecule dataset. MolGroup achieves this by utilizing a routing mechanism optimized through a bi-level optimization framework. Empowered by the meta gradient, the routing mechanism is optimized toward maximizing the target dataset's performance and quantifies the affinity as the gating score. As a result, MolGroup is capable of predicting the optimal combination of auxiliary datasets for each target dataset. Our extensive experiments demonstrate the efficiency and effectiveness of MolGroup, showing an average improvement of 4.41%/3.47% for GIN/Graphormer trained with the group of molecule datasets selected by MolGroup on 11 target molecule datasets.
Learning to Group Auxiliary Datasets for Molecule
[ "Tinglin Huang", "Ziniu Hu", "Zhitao Ying" ]
Conference
poster
2307.04052
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=r8snfquzs3
@inproceedings{ wang2023grounding, title={Grounding Neural Inference with Satisfiability Modulo Theories}, author={Zifan Wang and Saranya Vijayakumar and Kaiji Lu and Vijay Ganesh and Somesh Jha and Matt Fredrikson}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=r8snfquzs3} }
Recent techniques that integrate solver layers into Deep Neural Networks (DNNs) have shown promise in bridging a long-standing gap between inductive learning and symbolic reasoning techniques. In this paper, we present a set of techniques for integrating Satisfiability Modulo Theories (SMT) solvers into the forward and backward passes of a deep network layer, called SMTLayer. Using this approach, one can encode rich domain knowledge into the network in the form of mathematical formulas. In the forward pass, the solver uses symbols produced by prior layers, along with these formulas, to construct inferences; in the backward pass, the solver informs updates to the network, driving it towards representations that are compatible with the solver's theory. Notably, the solver need not be differentiable. We implement SMTLayer as a PyTorch module, and our empirical results show that it leads to models that 1) require fewer training samples than conventional models, 2) are robust to certain types of covariate shift, and 3) ultimately learn representations that are consistent with symbolic knowledge and are thus naturally interpretable.
Grounding Neural Inference with Satisfiability Modulo Theories
[ "Zifan Wang", "Saranya Vijayakumar", "Kaiji Lu", "Vijay Ganesh", "Somesh Jha", "Matt Fredrikson" ]
Conference
spotlight
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=r8LYNleLf9
@inproceedings{ chen2023texq, title={TexQ: Zero-shot Network Quantization with Texture Feature Distribution Calibration}, author={Xinrui Chen and Yizhi Wang and Renao Yan and Yiqing Liu and Tian Guan and Yonghong He}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=r8LYNleLf9} }
Quantization is an effective way to compress neural networks. By reducing the bit width of the parameters, the processing efficiency of neural network models at edge devices can be notably improved. Most conventional quantization methods utilize real datasets to optimize quantization parameters and for fine-tuning. Due to the inevitable privacy and security issues of real samples, the existing real-data-driven methods are no longer applicable. Thus, a natural method is to introduce synthetic samples for zero-shot quantization (ZSQ). However, the conventional synthetic samples fail to retain the detailed texture feature distributions, which severely limits the knowledge transfer and performance of the quantized model. In this paper, a novel ZSQ method, TexQ, is proposed to address this issue. We first synthesize a calibration image and extract its calibration center for each class with a texture feature energy distribution calibration method. Then, the calibration centers are used to guide the generator to synthesize samples. Finally, we introduce the mixup knowledge distillation module to diversify synthetic samples for fine-tuning. Extensive experiments on CIFAR10/100 and ImageNet show that TexQ achieves state-of-the-art performance in ultra-low bit width quantization. For example, when ResNet-18 is quantized to 3-bit, TexQ achieves a 12.18% top-1 accuracy increase on ImageNet compared to state-of-the-art methods. Code is available at https://github.com/dangsingrue/TexQ.
TexQ: Zero-shot Network Quantization with Texture Feature Distribution Calibration
[ "Xinrui Chen", "Yizhi Wang", "Renao Yan", "Yiqing Liu", "Tian Guan", "Yonghong He" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=r7g9nFsulw
@inproceedings{ wang2023learning, title={Learning Adaptive Tensorial Density Fields for Clean Cryo-{ET} Reconstruction}, author={YUANHAO WANG and Ramzi Idoughi and Wolfgang Heidrich}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=r7g9nFsulw} }
We present a novel learning-based framework for reconstructing 3D structures from tilt-series cryo-Electron Tomography (cryo-ET) data. Cryo-ET is a powerful imaging technique that can achieve near-atomic resolutions. Still, it suffers from challenges such as missing-wedge acquisition, large data size, and high noise levels. Our framework addresses these challenges by using an adaptive tensorial-based representation for the 3D density field of the scanned sample. First, we optimize a quadtree structure to partition the volume of interest. Then, we learn a vector-matrix factorization of the tensor representing the density field in each node. Moreover, we use a loss function that combines a differentiable tomographic formation model with three regularization terms: total variation, boundary consistency constraint, and an isotropic Fourier prior. Our framework allows us to query the density at any location using the learned representation and obtain a high-quality 3D tomogram. We demonstrate the superiority of our framework over existing methods using synthetic and real data. Thus, our framework boosts the quality of the reconstruction while reducing the computation time and the memory footprint. The code is available at https://github.com/yuanhaowang1213/adaptivetensordf.
Learning Adaptive Tensorial Density Fields for Clean Cryo-ET Reconstruction
[ "YUANHAO WANG", "Ramzi Idoughi", "Wolfgang Heidrich" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=r6xGZ0XL2g
@inproceedings{ khodak2023metalearning, title={Meta-Learning Adversarial Bandit Algorithms}, author={Mikhail Khodak and Ilya Osadchiy and Keegan Harris and Nina Balcan and Kfir Yehuda Levy and Ron Meir and Steven Wu}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=r6xGZ0XL2g} }
We study online meta-learning with bandit feedback, with the goal of improving performance across multiple tasks if they are similar according to some natural similarity measure. As the first to target the adversarial online-within-online partial-information setting, we design meta-algorithms that combine outer learners to simultaneously tune the initialization and other hyperparameters of an inner learner for two important cases: multi-armed bandits (MAB) and bandit linear optimization (BLO). For MAB, the meta-learners initialize and set hyperparameters of the Tsallis-entropy generalization of Exp3, with the task-averaged regret improving if the entropy of the optima-in-hindsight is small. For BLO, we learn to initialize and tune online mirror descent (OMD) with self-concordant barrier regularizers, showing that task-averaged regret varies directly with an action space-dependent measure they induce. Our guarantees rely on proving that unregularized follow-the-leader combined with two levels of low-dimensional hyperparameter tuning is enough to learn a sequence of affine functions of non-Lipschitz and sometimes non-convex Bregman divergences bounding the regret of OMD.
Meta-Learning Adversarial Bandit Algorithms
[ "Mikhail Khodak", "Ilya Osadchiy", "Keegan Harris", "Nina Balcan", "Kfir Yehuda Levy", "Ron Meir", "Steven Wu" ]
Conference
poster
2307.02295
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qyixBZl8Ph
@inproceedings{ aketi2023global, title={Global Update Tracking: A Decentralized Learning Algorithm for Heterogeneous Data}, author={Sai Aparna Aketi and Abolfazl Hashemi and Kaushik Roy}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=qyixBZl8Ph} }
Decentralized learning enables the training of deep learning models over large distributed datasets generated at different locations, without the need for a central server. However, in practical scenarios, the data distribution across these devices can be significantly different, leading to a degradation in model performance. In this paper, we focus on designing a decentralized learning algorithm that is less susceptible to variations in data distribution across devices. We propose Global Update Tracking (GUT), a novel tracking-based method that aims to mitigate the impact of heterogeneous data in decentralized learning without introducing any communication overhead. We demonstrate the effectiveness of the proposed technique through an exhaustive set of experiments on various Computer Vision datasets (CIFAR-10, CIFAR-100, Fashion MNIST, and ImageNette), model architectures, and network topologies. Our experiments show that the proposed method achieves state-of-the-art performance for decentralized learning on heterogeneous data via a 1-6% improvement in test accuracy compared to other existing techniques.
Global Update Tracking: A Decentralized Learning Algorithm for Heterogeneous Data
[ "Sai Aparna Aketi", "Abolfazl Hashemi", "Kaushik Roy" ]
Conference
poster
2305.04792
[ "https://github.com/aparna-aketi/global_update_tracking" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qyEm4tF2p1
@inproceedings{ zharmagambetov2023landscape, title={Landscape Surrogate: Learning Decision Losses for Mathematical Optimization Under Partial Information}, author={Arman Zharmagambetov and Brandon Amos and Aaron M Ferber and Taoan Huang and Bistra Dilkina and Yuandong Tian}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=qyEm4tF2p1} }
Recent works in learning-integrated optimization have shown promise in settings where the optimization problem is only partially observed or where general-purpose optimizers perform poorly without expert tuning. By learning an optimizer $\mathbf{g}$ to tackle these challenging problems with $f$ as the objective, the optimization process can be substantially accelerated by leveraging past experience. The optimizer can be trained with supervision from known optimal solutions or implicitly by optimizing the compound function $f\circ \mathbf{g}$. The implicit approach may not require optimal solutions as labels and is capable of handling problem uncertainty; however, it is slow to train and deploy due to frequent calls to optimizer $\mathbf{g}$ during both training and testing. The training is further challenged by sparse gradients of $\mathbf{g}$, especially for combinatorial solvers. To address these challenges, we propose using a smooth and learnable **Landscape Surrogate** $\mathcal{M}$ as a replacement for $f\circ \mathbf{g}$. This surrogate, learnable by neural networks, can be computed faster than the solver $\mathbf{g}$, provides dense and smooth gradients during training, can generalize to unseen optimization problems, and is efficiently learned via alternating optimization. We test our approach on both synthetic problems, including shortest path and multidimensional knapsack, and real-world problems such as portfolio optimization, achieving comparable or superior objective values compared to state-of-the-art baselines while reducing the number of calls to $\mathbf{g}$. Notably, our approach outperforms existing methods for computationally expensive high-dimensional problems.
Landscape Surrogate: Learning Decision Losses for Mathematical Optimization Under Partial Information
[ "Arman Zharmagambetov", "Brandon Amos", "Aaron M Ferber", "Taoan Huang", "Bistra Dilkina", "Yuandong Tian" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qy07OHsJT5
@inproceedings{ shi2023diffusion, title={Diffusion Schr\"odinger Bridge Matching}, author={Yuyang Shi and Valentin De Bortoli and Andrew Campbell and Arnaud Doucet}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=qy07OHsJT5} }
Solving transport problems, i.e. finding a map transporting one given distribution to another, has numerous applications in machine learning. Novel mass transport methods motivated by generative modeling have recently been proposed, e.g. Denoising Diffusion Models (DDMs) and Flow Matching Models (FMMs) implement such a transport through a Stochastic Differential Equation (SDE) or an Ordinary Differential Equation (ODE). However, while it is desirable in many applications to approximate the deterministic dynamic Optimal Transport (OT) map which admits attractive properties, DDMs and FMMs are not guaranteed to provide transports close to the OT map. In contrast, Schrödinger bridges (SBs) compute stochastic dynamic mappings which recover entropy-regularized versions of OT. Unfortunately, existing numerical methods approximating SBs either scale poorly with dimension or accumulate errors across iterations. In this work, we introduce Iterative Markovian Fitting (IMF), a new methodology for solving SB problems, and Diffusion Schrödinger Bridge Matching (DSBM), a novel numerical algorithm for computing IMF iterates. DSBM significantly improves over previous SB numerics and recovers as special/limiting cases various recent transport methods. We demonstrate the performance of DSBM on a variety of problems.
Diffusion Schrödinger Bridge Matching
[ "Yuyang Shi", "Valentin De Bortoli", "Andrew Campbell", "Arnaud Doucet" ]
Conference
poster
2303.16852
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qxF8Pge6vM
@inproceedings{ saanum2023reinforcement, title={Reinforcement Learning with Simple Sequence Priors}, author={Tankred Saanum and Noemi Elteto and Peter Dayan and Marcel Binz and Eric Schulz}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=qxF8Pge6vM} }
In reinforcement learning (RL), simplicity is typically quantified on an action-by-action basis -- but this timescale ignores temporal regularities, like repetitions, often present in sequential strategies. We therefore propose an RL algorithm that learns to solve tasks with sequences of actions that are compressible. We explore two possible sources of simple action sequences: Sequences that can be learned by autoregressive models, and sequences that are compressible with off-the-shelf data compression algorithms. Distilling these preferences into sequence priors, we derive a novel information-theoretic objective that incentivizes agents to learn policies that maximize rewards while conforming to these priors. We show that the resulting RL algorithm leads to faster learning, and attains higher returns than state-of-the-art model-free approaches in a series of continuous control tasks from the DeepMind Control Suite. These priors also produce a powerful information-regularized agent that is robust to noisy observations and can perform open-loop control.
Reinforcement Learning with Simple Sequence Priors
[ "Tankred Saanum", "Noemi Elteto", "Peter Dayan", "Marcel Binz", "Eric Schulz" ]
Conference
poster
2305.17109
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qv5UZJTNda
@inproceedings{ xu2023multimodal, title={Multimodal Deep Learning Model Unveils Behavioral Dynamics of V1 Activity in Freely Moving Mice}, author={Aiwen Xu and Yuchen Hou and Cris M. Niell and Michael Beyeler}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=qv5UZJTNda} }
Despite their immense success as a model of macaque visual cortex, deep convolutional neural networks (CNNs) have struggled to predict activity in visual cortex of the mouse, which is thought to be strongly dependent on the animal’s behavioral state. Furthermore, most computational models focus on predicting neural responses to static images presented under head fixation, which are dramatically different from the dynamic, continuous visual stimuli that arise during movement in the real world. Consequently, it is still unknown how natural visual input and different behavioral variables may integrate over time to generate responses in primary visual cortex (V1). To address this, we introduce a multimodal recurrent neural network that integrates gaze-contingent visual input with behavioral and temporal dynamics to explain V1 activity in freely moving mice. We show that the model achieves state-of-the-art predictions of V1 activity during free exploration and demonstrate the importance of each component in an extensive ablation study. Analyzing our model using maximally activating stimuli and saliency maps, we reveal new insights into cortical function, including the prevalence of mixed selectivity for behavioral variables in mouse V1. In summary, our model offers a comprehensive deep-learning framework for exploring the computational principles underlying V1 neurons in freely-moving animals engaged in natural behavior.
Multimodal Deep Learning Model Unveils Behavioral Dynamics of V1 Activity in Freely Moving Mice
[ "Aiwen Xu", "Yuchen Hou", "Cris M. Niell", "Michael Beyeler" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qumBHr77ht
@inproceedings{ makur2023on, title={On the Robustness of Mechanism Design under Total Variation Distance}, author={Anuran Makur and Marios Mertzanidis and Alexandros Psomas and Athina Terzoglou}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=qumBHr77ht} }
We study the problem of designing mechanisms when agents' valuation functions are drawn from unknown and correlated prior distributions. In particular, we are given a prior distribution $D$, and we are interested in designing a (truthful) mechanism that has good performance for all "true distributions" that are close to $D$ in Total Variation (TV) distance. We show that DSIC and BIC mechanisms in this setting are strongly robust with respect to TV distance, for any bounded objective function $\mathcal{O}$, extending a recent result of Brustle et al. ([BCD20], EC 2020). At the heart of our result is a fundamental duality property of total variation distance. As direct applications of our result, we (i) demonstrate how to find approximately revenue-optimal and approximately BIC mechanisms for weakly dependent prior distributions; (ii) show how to find correlation-robust mechanisms when only "noisy" versions of marginals are accessible, extending recent results of Bei et al. ([BGLT19], SODA 2019); (iii) prove that prophet-inequality type guarantees are preserved for correlated priors, recovering a variant of a result of Dütting and Kesselheim ([DK19], EC 2019) as a special case; (iv) give a new necessary condition for a correlated distribution to witness an infinite separation in revenue between simple and optimal mechanisms, complementing recent results of Psomas et al. ([PSCW22], NeurIPS 2022); (v) give a new condition for simple mechanisms to approximate revenue-optimal mechanisms for the case of a single agent whose type is drawn from a correlated distribution that can be captured by a Markov Random Field, complementing recent results of Cai and Oikonomou ([CO21], EC 2021).
On the Robustness of Mechanism Design under Total Variation Distance
[ "Anuran Makur", "Marios Mertzanidis", "Alexandros Psomas", "Athina Terzoglou" ]
Conference
poster
2310.07809
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=quMBEd27x9
@inproceedings{ kang2023performance, title={Performance Scaling via Optimal Transport: Enabling Data Selection from Partially Revealed Sources}, author={Feiyang Kang and Hoang Anh Just and Anit Kumar Sahu and Ruoxi Jia}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=quMBEd27x9} }
Traditionally, data selection has been studied in settings where all samples from prospective sources are fully revealed to a machine learning developer. However, in practical data exchange scenarios, data providers often reveal only a limited subset of samples before an acquisition decision is made. Recently, there have been efforts to fit scaling functions that predict model performance at any *size and data source composition* using the limited available samples. However, these scaling functions are usually black-box, computationally expensive to fit, highly susceptible to overfitting, and/or difficult to optimize for data selection. This paper proposes a framework called *<projektor>*, which predicts model performance and supports data selection decisions based on partial samples of prospective data sources. Our approach distinguishes itself from existing work by introducing a novel *two-stage* performance inference process. In the first stage, we leverage the Optimal Transport distance to predict the model's performance for any data mixture ratio within the range of disclosed data sizes. In the second stage, we extrapolate the performance to larger undisclosed data sizes based on a novel parameter-free mapping technique inspired by neural scaling laws. We further derive an efficient gradient-based method to select data sources based on the projected model performance. Evaluation over a diverse range of applications (e.g., vision, text, fine-tuning, noisy data sources, etc.) demonstrates that *<projektor>* significantly improves existing performance scaling approaches in terms of both the accuracy of performance inference and the computation costs associated with constructing the performance predictor. Also, *<projektor>* outperforms a range of other off-the-shelf solutions in data selection effectiveness by a wide margin. We provide *<projektor>* as an open-source toolkit.
Performance Scaling via Optimal Transport: Enabling Data Selection from Partially Revealed Sources
[ "Feiyang Kang", "Hoang Anh Just", "Anit Kumar Sahu", "Ruoxi Jia" ]
Conference
poster
2307.02460
[ "" ]
https://huggingface.co/papers/2307.02460
2
0
0
4
1
[]
[]
[]
null
https://openreview.net/forum?id=qs4swxtIAQ
@inproceedings{ gulati2023tabmt, title={Tab{MT}: Generating tabular data with masked transformers}, author={Manbir S Gulati and Paul F Roysdon}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=qs4swxtIAQ} }
Autoregressive and Masked Transformers are incredibly effective as generative models and classifiers. While these models are most prevalent in NLP, they also exhibit strong performance in other domains, such as vision. This work contributes to the exploration of transformer-based models in synthetic data generation for diverse application domains. In this paper, we present TabMT, a novel Masked Transformer design for generating synthetic tabular data. TabMT effectively addresses the unique challenges posed by heterogeneous data fields and is natively able to handle missing data. Our design leverages improved masking techniques to allow for generation and demonstrates state-of-the-art performance from extremely small to extremely large tabular datasets. We evaluate TabMT for privacy-focused applications and find that it is able to generate high quality data with superior privacy tradeoffs.
TabMT: Generating tabular data with masked transformers
[ "Manbir S Gulati", "Paul F Roysdon" ]
Conference
poster
2312.06089
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qqcIM8NiiB
@inproceedings{ gu2023photoswap, title={{PHOTOSWAP}: Personalized Subject Swapping in Images}, author={Jing Gu and Yilin Wang and Nanxuan Zhao and Tsu-Jui Fu and Wei Xiong and Qing Liu and Zhifei Zhang and HE Zhang and Jianming Zhang and HyunJoon Jung and Xin Eric Wang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=qqcIM8NiiB} }
In an era where images and visual content dominate our digital landscape, the ability to manipulate and personalize these images has become a necessity. Envision seamlessly substituting a tabby cat lounging on a sunlit window sill in a photograph with your own playful puppy, all while preserving the original charm and composition of the image. We present \emph{Photoswap}, a novel approach that enables this immersive image editing experience through personalized subject swapping in existing images. \emph{Photoswap} first learns the visual concept of the subject from reference images and then swaps it into the target image using pre-trained diffusion models in a training-free manner. We establish that a well-conceptualized visual subject can be seamlessly transferred to any image with appropriate self-attention and cross-attention manipulation, maintaining the pose of the swapped subject and the overall coherence of the image. Comprehensive experiments underscore the efficacy and controllability of \emph{Photoswap} in personalized subject swapping. Furthermore, \emph{Photoswap} significantly outperforms baseline methods in human ratings across subject swapping, background preservation, and overall quality, revealing its vast application potential, from entertainment to professional editing.
PHOTOSWAP: Personalized Subject Swapping in Images
[ "Jing Gu", "Yilin Wang", "Nanxuan Zhao", "Tsu-Jui Fu", "Wei Xiong", "Qing Liu", "Zhifei Zhang", "HE Zhang", "Jianming Zhang", "HyunJoon Jung", "Xin Eric Wang" ]
Conference
poster
2305.18286
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qptO6YDZEP
@inproceedings{ kacham2023lower, title={Lower Bounds on Adaptive Sensing for Matrix Recovery}, author={Praneeth Kacham and David Woodruff}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=qptO6YDZEP} }
We study lower bounds on adaptive sensing algorithms for recovering low rank matrices using linear measurements. Given an $n \times n$ matrix $A$, a general linear measurement $S(A)$, for an $n \times n$ matrix $S$, is just the inner product of $S$ and $A$, each treated as $n^2$-dimensional vectors. By performing as few linear measurements as possible on a rank-$r$ matrix $A$, we hope to construct a matrix $\hat{A}$ that satisfies $|A - \hat{A}|\_F^2 \le c |A|\_F^2$, for a small constant $c$. Here $|A|\_F$ denotes the Frobenius norm $(\sum_{i,j} A_{i,j}^2)^{1/2}$. It is commonly assumed that when measuring $A$ with $S$, the response is corrupted with an independent Gaussian random variable of mean $0$ and variance $\sigma^2$. Candès and Plan (IEEE Trans. Inform. Theory 2011) study non-adaptive algorithms for low rank matrix recovery using random linear measurements. They use the restricted isometry property (RIP) of Random Gaussian Matrices to give tractable algorithms to estimate $A$ from the measurements. At the edge of the noise level where recovery is information-theoretically feasible, it is known that their non-adaptive algorithms need to perform $\Omega(n^2)$ measurements, which amounts to reading the entire matrix. An important question is whether adaptivity helps in decreasing the overall number of measurements. While for the related problem of sparse recovery, adaptive algorithms have been extensively studied, as far as we are aware adaptive algorithms and lower bounds on them seem largely unexplored for matrix recovery. We show that any adaptive algorithm that uses $k$ linear measurements in each round and outputs an approximation as in (1) with probability $\ge 9/10$ must run for $t = \Omega(\log(n^2/k)/\log\log n)$ rounds. Our lower bound shows that any adaptive algorithm which uses $n^{2-\beta}$ ($\beta > 0$ is arbitrary constant) linear measurements in each round must run for $\Omega(\log n/\log\log n)$ rounds. Our techniques also readily extend to obtain lower bounds on adaptive algorithms for tensor recovery. Our hard distribution also allows us to give a measurement-vs-rounds trade-off for many sensing problems in numerical linear algebra, such as spectral norm low rank approximation, Frobenius norm low rank approximation, singular vector approximation, and more.
Lower Bounds on Adaptive Sensing for Matrix Recovery
[ "Praneeth Kacham", "David Woodruff" ]
Conference
poster
2311.17281
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qlnlamFQEa
@inproceedings{ sun2023aligning, title={Aligning Synthetic Medical Images with Clinical Knowledge using Human Feedback}, author={Shenghuan Sun and Gregory Goldgof and Atul Butte and Ahmed Alaa}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=qlnlamFQEa} }
Generative models capable of precisely capturing nuanced clinical features in medical images hold great promise for facilitating clinical data sharing, enhancing rare disease datasets, and efficiently synthesizing (annotated) medical images at scale. Despite their potential, assessing the quality of synthetic medical images remains a challenge. While modern generative models can synthesize visually-realistic medical images, the clinical plausibility of these images may be called into question. Domain-agnostic scores, such as FID score, precision, and recall, cannot incorporate clinical knowledge and are, therefore, not suitable for assessing clinical sensibility. Additionally, there are numerous unpredictable ways in which generative models may fail to synthesize clinically plausible images, making it challenging to anticipate potential failures and design automated scores for their detection. To address these challenges, this paper introduces a pathologist-in-the-loop framework for generating clinically-plausible synthetic medical images. Our framework comprises three steps: (1) pretraining a conditional diffusion model to generate medical images conditioned on a clinical concept, (2) expert pathologist evaluation of the generated images to assess whether they satisfy clinical desiderata, and (3) training a reward model that predicts human feedback on new samples, which we use to incorporate expert knowledge into the finetuning objective of the diffusion model. Our results show that human feedback significantly improves the quality of synthetic images in terms of fidelity, diversity, utility in downstream applications, and plausibility as evaluated by experts. We also demonstrate that human feedback can teach the model new clinical concepts not annotated in the original training data. Our results demonstrate the value of incorporating human feedback in clinical applications where generative models may struggle to capture extensive domain knowledge from raw data alone.
Aligning Synthetic Medical Images with Clinical Knowledge using Human Feedback
[ "Shenghuan Sun", "Gregory Goldgof", "Atul Butte", "Ahmed Alaa" ]
Conference
spotlight
2306.12438
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qlJoo2y3gY
@inproceedings{ liu2023bayesian, title={Bayesian nonparametric (non-)renewal processes for analyzing neural spike train variability}, author={David Liu and M{\'a}t{\'e} Lengyel}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=qlJoo2y3gY} }
Neural spiking activity is generally variable, non-stationary, and exhibits complex dependencies on covariates, such as sensory input or behavior. These dependencies have been proposed to be signatures of specific computations, and so characterizing them with quantitative rigor is critical for understanding neural computations. Approaches based on point processes provide a principled statistical framework for modeling neural spiking activity. However, currently, they only allow the instantaneous mean, but not the instantaneous variability, of responses to depend on covariates. To resolve this limitation, we propose a scalable Bayesian approach generalizing modulated renewal processes using sparse variational Gaussian processes. We leverage pathwise conditioning for computing nonparametric priors over conditional interspike interval distributions and rely on automatic relevance determination to detect lagging interspike interval dependencies beyond renewal order. After systematically validating our method on synthetic data, we apply it to two foundational datasets of animal navigation: head direction cells in freely moving mice and hippocampal place cells in rats running along a linear track. Our model exhibits competitive or better predictive power compared to state-of-the-art baselines, and outperforms them in terms of capturing interspike interval statistics. These results confirm the importance of modeling covariate-dependent spiking variability, and further analyses of our fitted models reveal rich patterns of variability modulation beyond the temporal resolution of flexible count-based approaches.
Bayesian nonparametric (non-)renewal processes for analyzing neural spike train variability
[ "David Liu", "Máté Lengyel" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=ql6LVyi2Dg
@inproceedings{ zhu2023stability, title={Stability and Generalization of the Decentralized Stochastic Gradient Descent Ascent Algorithm}, author={Miaoxi Zhu and Li Shen and Bo Du and Dacheng Tao}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=ql6LVyi2Dg} }
The growing size of available data has attracted increasing interest in solving minimax problems in a decentralized manner for various machine learning tasks. Previous theoretical research has primarily focused on the convergence rate and communication complexity of decentralized minimax algorithms, with little attention given to their generalization. In this paper, we investigate the primal-dual generalization bound of the decentralized stochastic gradient descent ascent (D-SGDA) algorithm using the approach of algorithmic stability under both convex-concave and nonconvex-nonconcave settings. Our theory refines the algorithmic stability in a decentralized manner and demonstrates that the decentralized structure does not destroy the stability and generalization of D-SGDA, implying that it can generalize as well as the vanilla SGDA in certain situations. Our results analyze the impact of different topologies on the generalization bound of the D-SGDA algorithm beyond trivial factors such as sample sizes, learning rates, and iterations. We also evaluate the optimization error and balance it with the generalization gap to obtain the optimal population risk of D-SGDA in the convex-concave setting. Additionally, we perform several numerical experiments which validate our theoretical findings.
Stability and Generalization of the Decentralized Stochastic Gradient Descent Ascent Algorithm
[ "Miaoxi Zhu", "Li Shen", "Bo Du", "Dacheng Tao" ]
Conference
poster
2310.20369
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qjqJL2lfkH
@inproceedings{ kim2023rank, title={Rank-1 Matrix Completion with Gradient Descent and Small Random Initialization}, author={Daesung Kim and Hye Won Chung}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=qjqJL2lfkH} }
The nonconvex formulation of the matrix completion problem has received significant attention in recent years due to its affordable complexity compared to the convex formulation. Gradient Descent (GD) is a simple yet efficient baseline algorithm for solving nonconvex optimization problems. The success of GD has been witnessed in many different problems in both theory and practice when it is combined with random initialization. However, previous works on matrix completion require either careful initialization or regularizers to prove the convergence of GD. In this paper, we study the rank-1 symmetric matrix completion and prove that GD converges to the ground truth when small random initialization is used. We show that in a logarithmic number of iterations, the trajectory enters the region where local convergence occurs. We provide an upper bound on the initialization size that is sufficient to guarantee the convergence, and show that a larger initialization can be used as more samples are available. We observe that the implicit regularization effect of GD plays a critical role in the analysis, and for the entire trajectory, it prevents each entry from becoming much larger than the others.
Rank-1 Matrix Completion with Gradient Descent and Small Random Initialization
[ "Daesung Kim", "Hye Won Chung" ]
Conference
poster
2212.09396
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qjnl1QUnFA
@inproceedings{ kumar2023highfidelity, title={High-Fidelity Audio Compression with Improved {RVQGAN}}, author={Rithesh Kumar and Prem Seetharaman and Alejandro Luebs and Ishaan Kumar and Kundan Kumar}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=qjnl1QUnFA} }
Language models have been successfully used to model natural signals, such as images, speech, and music. A key component of these models is a high quality neural compression model that can compress high-dimensional natural signals into lower dimensional discrete tokens. To that end, we introduce a high-fidelity universal neural audio compression algorithm that achieves ~90x compression of 44.1 KHz audio into tokens at just 8kbps bandwidth. We achieve this by combining advances in high-fidelity audio generation with better vector quantization techniques from the image domain, along with improved adversarial and reconstruction losses. We compress all domains (speech, environment, music, etc.) with a single universal model, making it widely applicable to generative modeling of all audio. We compare with competing audio compression algorithms, and find our method outperforms them significantly. We provide thorough ablations for every design choice, as well as open-source code and trained model weights. We hope our work can lay the foundation for the next generation of high-fidelity audio modeling.
High-Fidelity Audio Compression with Improved RVQGAN
[ "Rithesh Kumar", "Prem Seetharaman", "Alejandro Luebs", "Ishaan Kumar", "Kundan Kumar" ]
Conference
spotlight
2306.06546
[ "https://github.com/descriptinc/descript-audio-codec" ]
https://huggingface.co/papers/2306.06546
3
9
1
5
1
[ "parler-tts/dac_44khZ_8kbps", "descript/descript-audio-codec", "sarulab-speech/UTDUSS-Vocoder", "pharaouk/dac_44khZ_8kbps" ]
[]
[ "nvidia/BigVGAN" ]
null
https://openreview.net/forum?id=qieeNlO3C7
@inproceedings{ abbe2023transformers, title={Transformers learn through gradual rank increase}, author={Emmanuel Abbe and Samy Bengio and Enric Boix-Adser{\`a} and Etai Littwin and Joshua M. Susskind}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=qieeNlO3C7} }
We identify incremental learning dynamics in transformers, where the difference between trained and initial weights progressively increases in rank. We rigorously prove this occurs under the simplifying assumptions of diagonal weight matrices and small initialization. Our experiments support the theory and also show that the phenomenon can occur in practice without the simplifying assumptions.
Transformers learn through gradual rank increase
[ "Enric Boix-Adserà", "Etai Littwin", "Emmanuel Abbe", "Samy Bengio", "Joshua M. Susskind" ]
Conference
poster
2306.07042
[ "" ]
https://huggingface.co/papers/2306.07042
3
9
0
5
1
[]
[]
[]
null
https://openreview.net/forum?id=qgv56R2YJ7
@inproceedings{ epstein2023diffusion, title={Diffusion Self-Guidance for Controllable Image Generation}, author={Dave Epstein and Allan Jabri and Ben Poole and Alexei A Efros and Aleksander Holynski}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=qgv56R2YJ7} }
Large-scale generative models are capable of producing high-quality images from detailed prompts. However, many aspects of an image are difficult or impossible to convey through text. We introduce self-guidance, a method that provides precise control over properties of the generated image by guiding the internal representations of diffusion models. We demonstrate that the size, location, and appearance of objects can be extracted from these representations, and show how to use them to steer the sampling process. Self-guidance operates similarly to standard classifier guidance, but uses signals present in the pretrained model itself, requiring no additional models or training. We demonstrate the flexibility and effectiveness of self-guided generation through a wide range of challenging image manipulations, such as modifying the position or size of a single object (keeping the rest of the image unchanged), merging the appearance of objects in one image with the layout of another, composing objects from multiple images into one, and more. We also propose a new method for reconstruction using self-guidance, which allows extending our approach to editing real images.
Diffusion Self-Guidance for Controllable Image Generation
[ "Dave Epstein", "Allan Jabri", "Ben Poole", "Alexei A Efros", "Aleksander Holynski" ]
Conference
poster
2306.00986
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qgmrC8jhCo
@inproceedings{ tsai2023convolutional, title={Convolutional Visual Prompt for Robust Visual Perception}, author={Yun-Yun Tsai and Chengzhi Mao and Junfeng Yang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=qgmrC8jhCo} }
Vision models are often vulnerable to out-of-distribution (OOD) samples without adapting. While visual prompts offer a lightweight method of input-space adaptation for large-scale vision models, they rely on a high-dimensional additive vector and labeled data. This leads to overfitting when adapting models in a self-supervised test-time setting without labels. We introduce convolutional visual prompts (CVP) for label-free test-time adaptation for robust visual perception. The structured nature of CVP demands fewer trainable parameters, less than 1\% compared to standard visual prompts, combating overfitting. Extensive experiments and analysis on a wide variety of OOD visual perception tasks show that our approach is effective, improving robustness by up to 5.87\% over several large-scale models.
Convolutional Visual Prompt for Robust Visual Perception
[ "Yun-Yun Tsai", "Chengzhi Mao", "Junfeng Yang" ]
Conference
poster
2303.00198
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qgiG7WZohZ
@inproceedings{ velingker2023affinityaware, title={Affinity-Aware Graph Networks}, author={Ameya Velingker and Ali Kemal Sinop and Ira Ktena and Petar Veli{\v{c}}kovi{\'c} and Sreenivas Gollapudi}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=qgiG7WZohZ} }
Graph Neural Networks (GNNs) have emerged as a powerful technique for learning on relational data. Owing to the relatively limited number of message passing steps they perform—and hence a smaller receptive field—there has been significant interest in improving their expressivity by incorporating structural aspects of the underlying graph. In this paper, we explore the use of affinity measures as features in graph neural networks, in particular measures arising from random walks, including effective resistance, hitting and commute times. We propose message passing networks based on these features and evaluate their performance on a variety of node and graph property prediction tasks. Our architecture has low computational complexity, while our features are invariant to the permutations of the underlying graph. The measures we compute allow the network to exploit the connectivity properties of the graph, thereby allowing us to outperform relevant benchmarks for a wide variety of tasks, often with significantly fewer message passing steps. On one of the largest publicly available graph regression datasets, OGB-LSC-PCQM4Mv1, we obtain the best known single-model validation MAE at the time of writing.
Affinity-Aware Graph Networks
[ "Ameya Velingker", "Ali Kemal Sinop", "Ira Ktena", "Petar Veličković", "Sreenivas Gollapudi" ]
Conference
poster
2206.11941
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qdsDy0zbn4
@inproceedings{ chen2023antn, title={{ANTN}: Bridging Autoregressive Neural Networks and Tensor Networks for Quantum Many-Body Simulation}, author={Zhuo Chen and Laker Newhouse and Eddie Chen and Di Luo and Marin Soljacic}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=qdsDy0zbn4} }
Quantum many-body physics simulation has important impacts on understanding fundamental science and has applications to quantum materials design and quantum technology. However, due to the exponentially growing size of the Hilbert space with respect to the particle number, a direct simulation is intractable. While tensor networks and neural networks are the two state-of-the-art methods for approximately representing quantum states, each has its own limitations in terms of expressivity and inductive bias. To address these challenges, we develop a novel architecture, Autoregressive Neural TensorNet (ANTN), which bridges tensor networks and autoregressive neural networks. We show that Autoregressive Neural TensorNet parameterizes normalized wavefunctions, allows for exact sampling, generalizes the expressivity of tensor networks and autoregressive neural networks, and inherits a variety of symmetries from autoregressive neural networks. We demonstrate our approach on quantum state learning as well as finding the ground state of the challenging 2D $J_1$-$J_2$ Heisenberg model with different system sizes and coupling parameters, outperforming both tensor networks and autoregressive neural networks. Our work opens up new opportunities for quantum many-body physics simulation, quantum technology design, and generative modeling in artificial intelligence.
ANTN: Bridging Autoregressive Neural Networks and Tensor Networks for Quantum Many-Body Simulation
[ "Zhuo Chen", "Laker Newhouse", "Eddie Chen", "Di Luo", "Marin Soljacic" ]
Conference
poster
2304.01996
[ "https://github.com/antn2023/antn" ]
https://huggingface.co/papers/2304.01996
1
0
0
5
1
[]
[]
[]
null
https://openreview.net/forum?id=qdM260dXsa
@inproceedings{ xu2023crossdomain, title={Cross-Domain Policy Adaptation via Value-Guided Data Filtering}, author={Kang Xu and Chenjia Bai and Xiaoteng Ma and Dong Wang and Bin Zhao and Zhen Wang and Xuelong Li and Wei Li}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=qdM260dXsa} }
Generalizing policies across different domains with dynamics mismatch poses a significant challenge in reinforcement learning. For example, a robot learns the policy in a simulator, but when it is deployed in the real world, the dynamics of the environment may be different. Given the source and target domain with dynamics mismatch, we consider the online dynamics adaptation problem, in which case the agent can access sufficient source domain data while online interactions with the target domain are limited. Existing research has attempted to solve the problem from the dynamics discrepancy perspective. In this work, we reveal the limitations of these methods and explore the problem from the value difference perspective via a novel insight on the value consistency across domains. Specifically, we present the Value-Guided Data Filtering (VGDF) algorithm, which selectively shares transitions from the source domain based on the proximity of paired value targets across the two domains. Empirical results on various environments with kinematic and morphology shifts demonstrate that our method achieves superior performance compared to prior approaches.
Cross-Domain Policy Adaptation via Value-Guided Data Filtering
[ "Kang Xu", "Chenjia Bai", "Xiaoteng Ma", "Dong Wang", "Bin Zhao", "Zhen Wang", "Xuelong Li", "Wei Li" ]
Conference
poster
2305.17625
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qd9qcbVAwQ
@inproceedings{ zelikman2023parsel, title={Parsel🐍: Algorithmic Reasoning with Language Models by Composing Decompositions}, author={Eric Zelikman and Qian Huang and Gabriel Poesia and Noah Goodman and Nick Haber}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=qd9qcbVAwQ} }
Despite recent success in large language model (LLM) reasoning, LLMs struggle with hierarchical multi-step reasoning tasks like generating complex programs. For these tasks, humans often start with a high-level algorithmic design and implement each part gradually. We introduce Parsel, a framework enabling automatic implementation and validation of complex algorithms with code LLMs. With Parsel, we automatically decompose algorithmic tasks into hierarchical natural language function descriptions and then search over combinations of possible function implementations using tests. We show that Parsel can be used across domains requiring hierarchical reasoning, including program synthesis and robotic planning. We find that, using Parsel, LLMs solve more competition-level problems in the APPS dataset, resulting in pass rates over 75\% higher than prior results from directly sampling AlphaCode and Codex, while often using a smaller sample budget. Moreover, with automatically generated tests, we find that Parsel can improve the state-of-the-art pass@1 performance on HumanEval from 67\% to 85\%. We also find that LLM-generated robotic plans using Parsel are more than twice as likely to be considered accurate than directly generated plans. Lastly, we explore how Parsel addresses LLM limitations and discuss how Parsel may be useful for human programmers. We release our code at https://github.com/ezelikman/parsel.
Parsel🐍: Algorithmic Reasoning with Language Models by Composing Decompositions
[ "Eric Zelikman", "Qian Huang", "Gabriel Poesia", "Noah Goodman", "Nick Haber" ]
Conference
spotlight
[ "https://github.com/ezelikman/parsel" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qcQhBli5Ho
@inproceedings{ caccia2023multihead, title={Multi-Head Adapter Routing for Cross-Task Generalization}, author={Lucas Caccia and Edoardo Ponti and Zhan Su and Matheus Pereira and Nicolas Le Roux and Alessandro Sordoni}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=qcQhBli5Ho} }
Parameter-efficient fine-tuning (PEFT) for cross-task generalization consists in pre-training adapters on a multi-task training set before few-shot adaptation to test tasks. Polytropon [Ponti et al., 2023] ($\texttt{Poly}$) jointly learns an inventory of adapters and a *routing* function that selects a (variable-size) subset of adapters for each task during both pre-training and few-shot adaptation. In this paper, we investigate the role that adapter routing plays in its success and design new variants based on our findings. First, we build on the intuition that finer-grained routing provides more expressivity. Hence, we propose $\texttt{MHR}$ (Multi-Head Routing), which combines *subsets* of adapter parameters and outperforms $\texttt{Poly}$ under a comparable parameter budget; by only fine-tuning the routing function and not the adapters ($\texttt{MHR}$-$z$), we achieve competitive performance with extreme parameter efficiency. Second, we find that $\texttt{Poly}$/$\texttt{MHR}$ performance is a result of better multi-task optimization, rather than modular inductive biases that facilitate adapter recombination and local adaptation, as previously hypothesized. In fact, we find that $\texttt{MHR}$ exhibits high gradient alignment between training tasks. We find that routing is most beneficial during multi-task pre-training rather than during few-shot adaptation and propose $\texttt{MHR}$-$\mu$, which discards routing and fine-tunes the average of the pre-trained adapters on each downstream task. This establishes $\texttt{MHR}$-$\mu$ as an effective method for single-adapter fine-tuning. We also show that $\texttt{MHR}$-$\mu$ can be used as an effective zero-shot transfer method by training the average of the pre-trained adapters for a few additional steps on the multi-task training set: this yields gains of up to 3\% in absolute accuracy w.r.t. the baselines. Code is available at <https://github.com/microsoft/mttl>.
Multi-Head Adapter Routing for Cross-Task Generalization
[ "Lucas Caccia", "Edoardo Ponti", "Zhan Su", "Matheus Pereira", "Nicolas Le Roux", "Alessandro Sordoni" ]
Conference
poster
2211.03831
[ "https://github.com/microsoft/mttl" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qZjl2TKvUY
@inproceedings{ rigter2023one, title={One Risk to Rule Them All: A Risk-Sensitive Perspective on Model-Based Offline Reinforcement Learning}, author={Marc Rigter and Bruno Lacerda and Nick Hawes}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=qZjl2TKvUY} }
Offline reinforcement learning (RL) is suitable for safety-critical domains where online exploration is not feasible. In such domains, decision-making should take into consideration the risk of catastrophic outcomes. In other words, decision-making should be *risk-averse*. An additional challenge of offline RL is avoiding *distributional shift*, i.e. ensuring that state-action pairs visited by the policy remain near those in the dataset. Previous offline RL algorithms that consider risk combine offline RL techniques (to avoid distributional shift), with risk-sensitive RL algorithms (to achieve risk-aversion). In this work, we propose risk-aversion as a mechanism to jointly address *both* of these issues. We propose a model-based approach, and use an ensemble of models to estimate epistemic uncertainty, in addition to aleatoric uncertainty. We train a policy that is risk-averse, and avoids high uncertainty actions. Risk-aversion to epistemic uncertainty prevents distributional shift, as areas not covered by the dataset have high epistemic uncertainty. Risk-aversion to aleatoric uncertainty discourages actions that are risky due to environment stochasticity. Thus, by considering epistemic uncertainty via a model ensemble and introducing risk-aversion, our algorithm (1R2R) avoids distributional shift in addition to achieving risk-aversion to aleatoric risk. Our experiments show that 1R2R achieves strong performance on deterministic benchmarks, and outperforms existing approaches for risk-sensitive objectives in stochastic domains.
One Risk to Rule Them All: A Risk-Sensitive Perspective on Model-Based Offline Reinforcement Learning
[ "Marc Rigter", "Bruno Lacerda", "Nick Hawes" ]
Conference
poster
2212.00124
[ "https://github.com/marc-rigter/1r2r" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qYAp31KwU2
@inproceedings{ chu2023multiobjective, title={Multi-Objective Intrinsic Reward Learning for Conversational Recommender Systems}, author={Zhendong Chu and Nan Wang and Hongning Wang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=qYAp31KwU2} }
Conversational Recommender Systems (CRS) actively elicit user preferences to generate adaptive recommendations. Mainstream reinforcement learning-based CRS solutions heavily rely on handcrafted reward functions, which may not be aligned with user intent in CRS tasks. Therefore, the design of task-specific rewards is critical to facilitate CRS policy learning, which remains largely under-explored in the literature. In this work, we propose a novel approach to address this challenge by learning intrinsic rewards from interactions with users. Specifically, we formulate intrinsic reward learning as a multi-objective bi-level optimization problem. The inner level optimizes the CRS policy augmented by the learned intrinsic rewards, while the outer level drives the intrinsic rewards to optimize two CRS-specific objectives: maximizing the success rate and minimizing the number of turns to reach a successful recommendation in conversations. To evaluate the effectiveness of our approach, we conduct extensive experiments on three public CRS benchmarks. The results show that our algorithm significantly improves CRS performance by exploiting informative learned intrinsic rewards.
Multi-Objective Intrinsic Reward Learning for Conversational Recommender Systems
[ "Zhendong Chu", "Nan Wang", "Hongning Wang" ]
Conference
poster
2310.20109
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qY7UqLoora
@inproceedings{ mustafa2023are, title={Are {GAT}s Out of Balance?}, author={Nimrah Mustafa and Aleksandar Bojchevski and Rebekka Burkholz}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=qY7UqLoora} }
While the expressive power and computational capabilities of graph neural networks (GNNs) have been theoretically studied, their optimization and learning dynamics, in general, remain largely unexplored. Our study focuses on the Graph Attention Network (GAT), a popular GNN architecture in which a node's neighborhood aggregation is weighted by parameterized attention coefficients. We derive a conservation law of GAT gradient flow dynamics, which explains why a large portion of parameters in GATs with standard initialization struggle to change during training. This effect is amplified in deeper GATs, which perform significantly worse than their shallow counterparts. To alleviate this problem, we devise an initialization scheme that balances the GAT network. Our approach i) allows more effective propagation of gradients and in turn enables trainability of deeper networks, and ii) attains a considerable speedup in training and convergence time in comparison to the standard initialization. Our main theorem serves as a stepping stone to studying the learning dynamics of positive homogeneous models with attention mechanisms.
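For context on the architecture being analyzed, here is a minimal NumPy sketch of standard single-head GAT attention coefficients (Veličković et al., 2018). It illustrates the parameterized neighborhood weighting the abstract refers to, not the paper's conservation law or balanced initialization scheme; all variable names and the toy graph are illustrative.

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def gat_attention(H, A, W, a, slope=0.2):
    """Standard single-head GAT attention coefficients.
    H: (n, d_in) node features, A: (n, n) adjacency with self-loops,
    W: (d_in, d_out) linear map, a: (2*d_out,) attention vector.
    Returns alpha: (n, n) row-normalised attention over each node's neighbourhood."""
    Z = H @ W                                           # (n, d_out)
    d_out = Z.shape[1]
    src = Z @ a[:d_out]                                 # a_l^T W h_i, shape (n,)
    dst = Z @ a[d_out:]                                 # a_r^T W h_j, shape (n,)
    e = leaky_relu(src[:, None] + dst[None, :], slope)  # e_ij = LeakyReLU(a^T [Wh_i || Wh_j])
    e = np.where(A > 0, e, -np.inf)                     # mask non-neighbours
    e = e - e.max(axis=1, keepdims=True)                # numerical stability
    alpha = np.exp(e)
    return alpha / alpha.sum(axis=1, keepdims=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d_in, d_out = 4, 3, 2
    H = rng.normal(size=(n, d_in))
    A = np.array([[1, 1, 0, 0], [1, 1, 1, 0], [0, 1, 1, 1], [0, 0, 1, 1]], dtype=float)
    W = rng.normal(size=(d_in, d_out))
    a = rng.normal(size=2 * d_out)
    print(gat_attention(H, A, W, a).round(3))
```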
Are GATs Out of Balance?
[ "Nimrah Mustafa", "Aleksandar Bojchevski", "Rebekka Burkholz" ]
Conference
poster
2310.07235
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qVeDwgYsho
@inproceedings{ zeng2023copriv, title={CoPriv: Network/Protocol Co-Optimization for Communication-Efficient Private Inference}, author={Wenxuan Zeng and Meng Li and Haichuan Yang and Wen-jie Lu and Runsheng Wang and Ru Huang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=qVeDwgYsho} }
Deep neural network (DNN) inference based on secure 2-party computation (2PC) can offer cryptographically-secure privacy protection but suffers from orders of magnitude latency overhead due to enormous communication. Previous works heavily rely on a proxy metric of ReLU counts to approximate the communication overhead and focus on reducing the ReLUs to improve the communication efficiency. However, we observe these works achieve limited communication reduction for state-of-the-art (SOTA) 2PC protocols because they ignore other linear and non-linear operations, which now account for the majority of communication. In this work, we present CoPriv, a framework that jointly optimizes the 2PC inference protocol and the DNN architecture. CoPriv features a new 2PC protocol for convolution based on Winograd transformation and develops DNN-aware optimization to significantly reduce the inference communication. CoPriv further develops a 2PC-aware network optimization algorithm that is compatible with the proposed protocol and simultaneously reduces the communication for all the linear and non-linear operations. We compare CoPriv with the SOTA 2PC protocol, CrypTFlow2, and demonstrate 2.1× communication reduction for both ResNet-18 and ResNet-32 on CIFAR-100. We also compare CoPriv with SOTA network optimization methods, including SNL, MetaPruning, etc. CoPriv achieves 9.98× and 3.88× online and total communication reduction with a higher accuracy compared to SNL, respectively. CoPriv also achieves 3.87× online communication reduction with more than 3% higher accuracy compared to MetaPruning.
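To make the Winograd ingredient concrete, the sketch below applies the standard F(2×2, 3×3) Winograd transform (Lavin & Gray, 2016) to a single tile and checks it against direct correlation. It only illustrates how the transform replaces most multiplications with an element-wise product in the transform domain; the 2PC protocol, the DNN-aware optimization, and any CoPriv-specific details are not modeled here.

```python
import numpy as np

# Standard Winograd F(2x2, 3x3) transform matrices (Lavin & Gray, 2016).
B_T = np.array([[1, 0, -1, 0],
                [0, 1,  1, 0],
                [0, -1, 1, 0],
                [0, 1,  0, -1]], dtype=float)
G   = np.array([[1,    0,   0],
                [0.5,  0.5, 0.5],
                [0.5, -0.5, 0.5],
                [0,    0,   1]], dtype=float)
A_T = np.array([[1, 1,  1,  0],
                [0, 1, -1, -1]], dtype=float)

def winograd_tile(d, g):
    """2x2 output tile of a 3x3 correlation over a 4x4 input tile.
    The heavy lifting is the element-wise product U * V in the transform domain."""
    U = G @ g @ G.T          # transformed 3x3 filter -> 4x4
    V = B_T @ d @ B_T.T      # transformed 4x4 input tile -> 4x4
    return A_T @ (U * V) @ A_T.T

def direct_tile(d, g):
    """Reference 'valid' correlation of a 4x4 tile with a 3x3 filter."""
    out = np.zeros((2, 2))
    for i in range(2):
        for j in range(2):
            out[i, j] = np.sum(d[i:i+3, j:j+3] * g)
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, g = rng.normal(size=(4, 4)), rng.normal(size=(3, 3))
    assert np.allclose(winograd_tile(d, g), direct_tile(d, g))
    print(winograd_tile(d, g))
```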
CoPriv: Network/Protocol Co-Optimization for Communication-Efficient Private Inference
[ "Wenxuan Zeng", "Meng Li", "Haichuan Yang", "Wen-jie Lu", "Runsheng Wang", "Ru Huang" ]
Conference
poster
2311.01737
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qVMPXrX4FR
@inproceedings{ shi2023lambdabeam, title={LambdaBeam: Neural Program Search with Higher-Order Functions and Lambdas}, author={Kensen Shi and Hanjun Dai and Wen-Ding Li and Kevin Ellis and Charles Sutton}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=qVMPXrX4FR} }
Search is an important technique in program synthesis that allows for adaptive strategies such as focusing on particular search directions based on execution results. Several prior works have demonstrated that neural models are effective at guiding program synthesis searches. However, a common drawback of those approaches is the inability to handle iterative loops, higher-order functions, or lambda functions, thus limiting prior neural searches from synthesizing longer and more general programs. We address this gap by designing a search algorithm called LambdaBeam that can construct arbitrary lambda functions that compose operations within a given DSL. We create semantic vector representations of the execution behavior of the lambda functions and train a neural policy network to choose which lambdas to construct during search, and pass them as arguments to higher-order functions to perform looping computations. Our experiments show that LambdaBeam outperforms neural, symbolic, and LLM-based techniques in an integer list manipulation domain.
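The toy sketch below conveys the flavor of execution-guided search over lambdas passed to a higher-order function, using brute-force enumeration over a two-operation DSL instead of LambdaBeam's learned policy network and beam search. The DSL, program space, and example task are invented purely for illustration.

```python
from typing import Callable, List, Tuple

# A toy DSL: integer-addition lambdas and the higher-order `map`.
def make_add_lambda(c: int) -> Callable[[int], int]:
    return lambda x: x + c

def candidate_programs(max_const: int = 3):
    """Enumerate tiny programs of the form  map(lambda x: x + c, input)."""
    for c in range(-max_const, max_const + 1):
        fn = make_add_lambda(c)
        yield (f"map(lambda x: x + {c}, input)",
               lambda xs, fn=fn: [fn(x) for x in xs])

def search(examples: List[Tuple[list, list]]) -> str:
    """Keep the first enumerated program whose execution behaviour (its outputs
    on the example inputs) matches the target outputs -- execution-guided search."""
    for desc, prog in candidate_programs():
        if all(prog(inp) == out for inp, out in examples):
            return desc
    return "no program found"

if __name__ == "__main__":
    examples = [([1, 2, 3], [3, 4, 5]), ([0], [2])]
    print(search(examples))   # -> map(lambda x: x + 2, input)
```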
LambdaBeam: Neural Program Search with Higher-Order Functions and Lambdas
[ "Kensen Shi", "Hanjun Dai", "Wen-Ding Li", "Kevin Ellis", "Charles Sutton" ]
Conference
poster
2306.02049
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qUlpDjYnsp
@inproceedings{ cho2023multiresolution, title={Multi-resolution Spectral Coherence for Graph Generation with Score-based Diffusion}, author={Hyuna Cho and Minjae Jeong and Sooyeon Jeon and Sungsoo Ahn and Won Hwa Kim}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=qUlpDjYnsp} }
Successful graph generation depends on the accurate estimation of the joint distribution of graph components such as nodes and edges from training data. Although recent deep neural networks, combined with diffusion models, have demonstrated sampling of realistic graphs, they still suffer from oversmoothing problems inherited from conventional graph convolution, and thus high-frequency characteristics of nodes and edges become intractable. To overcome such issues and generate graphs with high fidelity, this paper introduces a novel approach that captures the dependency between nodes and edges at multiple resolutions in the spectral space. By modeling the joint distribution of node and edge signals in a shared graph wavelet space, together with a score-based diffusion model, we propose a Wavelet Graph Diffusion Model (Wave-GD) which lets us sample synthetic graphs with real-like frequency characteristics of nodes and edges. Experimental results on four representative benchmark datasets validate the superiority of Wave-GD over existing approaches, highlighting its potential for a wide range of applications that involve graph data.
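As background for the wavelet space mentioned above, this sketch builds spectral graph wavelet operators psi_s = U g(s*Lambda) U^T (Hammond et al., 2011) with a simple band-pass kernel. It is a generic construction assumed for illustration, not Wave-GD's actual wavelet design or its score-based diffusion component.

```python
import numpy as np

def graph_wavelet_filters(A, scales=(0.5, 1.0, 2.0)):
    """Spectral graph wavelet operators psi_s = U g(s*Lambda) U^T with a simple
    band-pass kernel g(x) = x * exp(-x). Each scale s emphasises a different
    frequency band of signals living on the graph."""
    deg = A.sum(axis=1)
    L = np.diag(deg) - A                       # combinatorial graph Laplacian
    lam, U = np.linalg.eigh(L)                 # eigendecomposition of L
    g = lambda x: x * np.exp(-x)               # band-pass wavelet kernel
    return [U @ np.diag(g(s * lam)) @ U.T for s in scales]

if __name__ == "__main__":
    # 4-node path graph and an impulse signal on the first node.
    A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
    x = np.array([1.0, 0.0, 0.0, 0.0])
    for s, psi in zip((0.5, 1.0, 2.0), graph_wavelet_filters(A)):
        print(f"scale {s}: wavelet coefficients {psi @ x}")
```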
Multi-resolution Spectral Coherence for Graph Generation with Score-based Diffusion
[ "Hyuna Cho", "Minjae Jeong", "Sooyeon Jeon", "Sungsoo Ahn", "Won Hwa Kim" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qSS9izTOpo
@inproceedings{ fang2023alleviating, title={Alleviating the Semantic Gap for Generalized f{MRI}-to-Image Reconstruction}, author={Tao Fang and Qian Zheng and Gang Pan}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=qSS9izTOpo} }
Although existing fMRI-to-image reconstruction methods can predict high-quality images, they do not explicitly consider the semantic gap between training and testing data, resulting in reconstructions with unstable and uncertain semantics. This paper addresses the problem of generalized fMRI-to-image reconstruction by explicitly alleviating the semantic gap. Specifically, we leverage the pre-trained CLIP model to map the training data to a compact feature representation, which essentially extends the sparse semantics of training data to dense ones, thus alleviating the semantic gap for instances near known concepts (i.e., inside the training super-classes). Inspired by the robust low-level representation in fMRI data, which could help alleviate the semantic gap for instances that are far from the known concepts (i.e., outside the training super-classes), we leverage structural information as a general cue to guide image reconstruction. Further, we quantify the semantic uncertainty based on probability density estimation and achieve Generalized fMRI-to-image reconstruction by adaptively integrating Expanded Semantics and Structural information (GESS) within a diffusion process. Experimental results demonstrate that the proposed GESS model outperforms state-of-the-art methods, and we propose a generalized scenario split strategy to evaluate the advantage of GESS in closing the semantic gap.
Alleviating the Semantic Gap for Generalized fMRI-to-Image Reconstruction
[ "Tao Fang", "Qian Zheng", "Gang Pan" ]
Conference
spotlight
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qSCziWQBPD
@inproceedings{ frei2023the, title={The Double-Edged Sword of Implicit Bias: Generalization vs. Robustness in Re{LU} Networks}, author={Spencer Frei and Gal Vardi and Peter Bartlett and Nathan Srebro}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=qSCziWQBPD} }
In this work, we study the implications of the implicit bias of gradient flow on generalization and adversarial robustness in ReLU networks. We focus on a setting where the data consists of clusters and the correlations between cluster means are small, and show that in two-layer ReLU networks gradient flow is biased towards solutions that generalize well, but are vulnerable to adversarial examples. Our results hold even in cases where the network is highly overparameterized. Despite the potential for harmful overfitting in such settings, we prove that the implicit bias of gradient flow prevents it. However, the implicit bias also leads to non-robust solutions (susceptible to small adversarial $\ell_2$-perturbations), even though robust networks that fit the data exist.
The Double-Edged Sword of Implicit Bias: Generalization vs. Robustness in ReLU Networks
[ "Spencer Frei", "Gal Vardi", "Peter Bartlett", "Nathan Srebro" ]
Conference
poster
2303.01456
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qS9aHF8bXz
@inproceedings{ zhu2023provably, title={Provably Efficient Offline Goal-Conditioned Reinforcement Learning with General Function Approximation and Single-Policy Concentrability}, author={Hanlin Zhu and Amy Zhang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=qS9aHF8bXz} }
Goal-conditioned reinforcement learning (GCRL) refers to learning general-purpose skills that aim to reach diverse goals. In particular, offline GCRL only requires purely pre-collected datasets to perform training tasks without additional interactions with the environment. Although offline GCRL has become increasingly prevalent and many previous works have demonstrated its empirical success, the theoretical understanding of efficient offline GCRL algorithms is not well established, especially when the state space is huge and the offline dataset only covers the policy we aim to learn. In this paper, we provide a rigorous theoretical analysis of an existing empirically successful offline GCRL algorithm. We prove that under slight modification, this algorithm enjoys an $\tilde{O}(\text{poly}(1/\epsilon))$ sample complexity (where $\epsilon$ is the desired suboptimality of the learned policy) with general function approximation thanks to the property of (semi-)strong convexity of the objective functions. We only require nearly minimal assumptions on the dataset (single-policy concentrability) and the function class (realizability). Moreover, this algorithm consists of two uninterleaved optimization steps, which we refer to as $V$-learning and policy learning, and is computationally stable since it does not involve minimax optimization. We also empirically validate our theory by showing that the modified algorithm outperforms the previous algorithm in various real-world environments. To the best of our knowledge, this is the first algorithm that is both provably efficient with general function approximation and single-policy concentrability, and empirically successful without requiring solving minimax optimization problems.
Provably Efficient Offline Goal-Conditioned Reinforcement Learning with General Function Approximation and Single-Policy Concentrability
[ "Hanlin Zhu", "Amy Zhang" ]
Conference
poster
2302.03770
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qQnO1HLQHe
@inproceedings{ bai2023complex, title={Complex Query Answering on Eventuality Knowledge Graph with Implicit Logical Constraints}, author={Jiaxin Bai and Xin Liu and Weiqi Wang and Chen Luo and Yangqiu Song}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=qQnO1HLQHe} }
Querying knowledge graphs (KGs) using deep learning approaches can naturally leverage the reasoning and generalization ability to learn to infer better answers. Traditional neural complex query answering (CQA) approaches mostly work on entity-centric KGs. However, in the real world, we also need to make logical inferences about events, states, and activities (i.e., eventualities or situations) to push learning systems from System I to System II, as proposed by Yoshua Bengio. Querying logically from an EVentuality-centric KG (EVKG) can naturally provide references to such kind of intuitive and logical inference. Thus, in this paper, we propose a new framework to leverage neural methods to answer complex logical queries based on an EVKG, which can satisfy not only traditional first-order logic constraints but also implicit logical constraints over eventualities concerning their occurrences and orders. For instance, if we know that *Food is bad* happens before *PersonX adds soy sauce*, then *PersonX adds soy sauce* is unlikely to be the cause of *Food is bad* due to implicit temporal constraint. To facilitate consistent reasoning on EVKGs, we propose Complex Eventuality Query Answering (CEQA), a more rigorous definition of CQA that considers the implicit logical constraints governing the temporal order and occurrence of eventualities. In this manner, we propose to leverage theorem provers for constructing benchmark datasets to ensure the answers satisfy implicit logical constraints. We also propose a Memory-Enhanced Query Encoding (MEQE) approach to significantly improve the performance of state-of-the-art neural query encoders on the CEQA task.
Complex Query Answering on Eventuality Knowledge Graph with Implicit Logical Constraints
[ "Jiaxin Bai", "Xin Liu", "Weiqi Wang", "Chen Luo", "Yangqiu Song" ]
Conference
poster
2305.19068
[ "https://github.com/hkust-knowcomp/ceqa" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qPyvuFT0U9
@inproceedings{ zhang2023statistical, title={Statistical Insights into {HSIC} in High Dimensions}, author={Tao Zhang and Yaowu Zhang and Tingyou Zhou}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=qPyvuFT0U9} }
Measuring the nonlinear dependence between random vectors and testing for their statistical independence is a fundamental problem in statistics. One of the most popular dependence measures is the Hilbert-Schmidt independence criterion (HSIC), which has attracted increasing attention in recent years. However, most existing works have focused on either fixed or very high-dimensional covariates. In this work, we bridge the gap between these two scenarios and provide statistical insights into the performance of HSIC when the dimensions grow at different rates. We first show that, under the null hypothesis, the rescaled HSIC converges in distribution to a standard normal distribution. Then we provide a general condition for the HSIC based tests to have nontrivial power in high dimensions. By decomposing this condition, we illustrate how the ability of HSIC to measure nonlinear dependence changes with increasing dimensions. Moreover, we demonstrate that, depending on the sample size, the covariate dimensions and the dependence structures within covariates, the HSIC can capture different types of associations between random vectors. We also conduct extensive numerical studies to validate our theoretical results.
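For readers unfamiliar with the statistic, the following sketch computes the standard biased empirical HSIC with Gaussian kernels and the median bandwidth heuristic. The paper's rescaled statistic, null distribution, and high-dimensional analysis are not reproduced, so treat this only as the baseline quantity under study.

```python
import numpy as np

def rbf_gram(X, sigma=None):
    """Gaussian-kernel Gram matrix with the median heuristic for the bandwidth."""
    sq = np.sum(X**2, axis=1, keepdims=True)
    d2 = sq + sq.T - 2 * X @ X.T
    if sigma is None:
        sigma = np.sqrt(0.5 * np.median(d2[d2 > 0]))
    return np.exp(-d2 / (2 * sigma**2))

def hsic_biased(X, Y):
    """Biased empirical HSIC: (1/n^2) * tr(K H L H) with H = I - (1/n) 11^T
    (Gretton et al., 2005). Larger values indicate stronger dependence."""
    n = X.shape[0]
    K, L = rbf_gram(X), rbf_gram(Y)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / n**2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    Y_dep = np.sin(X) + 0.1 * rng.normal(size=X.shape)   # nonlinearly dependent on X
    Y_ind = rng.normal(size=X.shape)                      # independent of X
    print("HSIC dependent  :", hsic_biased(X, Y_dep))
    print("HSIC independent:", hsic_biased(X, Y_ind))
```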
Statistical Insights into HSIC in High Dimensions
[ "Tao Zhang", "Yaowu Zhang", "Tingyou Zhou" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qPUbKxKvXq
@inproceedings{ agrawal2023monitorguided, title={Monitor-Guided Decoding of Code {LM}s with Static Analysis of Repository Context}, author={Lakshya Agrawal and Aditya Kanade and Navin Goyal and Shuvendu K Lahiri and Sriram Rajamani}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=qPUbKxKvXq} }
Language models of code (LMs) work well when the surrounding code provides sufficient context. This is not true when it becomes necessary to use types, functionality or APIs defined elsewhere in the repository or a linked library, especially those not seen during training. LMs suffer from limited awareness of such global context and end up hallucinating. Integrated development environments (IDEs) assist developers in understanding repository context using static analysis. We extend this assistance, enjoyed by developers, to LMs. We propose monitor-guided decoding (MGD) where a monitor uses static analysis to guide the decoding. We construct a repository-level dataset PragmaticCode for method-completion in Java and evaluate MGD on it. On models of varying parameter scale, by monitoring for type-consistent object dereferences, MGD consistently improves compilation rates and agreement with ground truth. Further, LMs with fewer parameters, when augmented with MGD, can outperform larger LMs. With MGD, SantaCoder-1.1B achieves better compilation rate and next-identifier match than the much larger text-davinci-003 model. We also conduct a generalizability study to evaluate the ability of MGD to generalize to multiple programming languages (Java, C# and Rust), coding scenarios (e.g., correct number of arguments to method calls), and to enforce richer semantic constraints (e.g., stateful API protocols). Our data and implementation are available at https://github.com/microsoft/monitors4codegen.
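A schematic of the masking idea (not the actual monitors4codegen implementation): at each decoding step, tokens corresponding to identifiers the static-analysis monitor rules out are masked before selection. The vocabulary, logits, and `allowed_identifiers` set below are hypothetical stand-ins for the monitor's real output.

```python
import numpy as np

def monitor_guided_step(logits, vocab, allowed_identifiers):
    """One schematic decoding step: identifier tokens not permitted by the
    static-analysis monitor are masked out before the next token is chosen.
    `allowed_identifiers` stands in for the monitor's output, e.g. the members
    that type-check at the current object dereference."""
    masked = logits.copy()
    for i, tok in enumerate(vocab):
        if tok.isidentifier() and tok not in allowed_identifiers:
            masked[i] = -np.inf
    probs = np.exp(masked - masked.max())
    probs /= probs.sum()
    return vocab[int(np.argmax(probs))]

if __name__ == "__main__":
    vocab = ["close", "flush", "frobnicate", "(", ")"]
    logits = np.array([1.0, 0.5, 2.5, 0.1, 0.1])   # the LM prefers a hallucinated method
    allowed = {"close", "flush"}                    # monitor: members that type-check
    print(monitor_guided_step(logits, vocab, allowed))   # -> "close"
```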
Monitor-Guided Decoding of Code LMs with Static Analysis of Repository Context
[ "Lakshya Agrawal", "Aditya Kanade", "Navin Goyal", "Shuvendu K Lahiri", "Sriram Rajamani" ]
Conference
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qP0Drg2HuH
@inproceedings{ wu2023read, title={Read and Reap the Rewards: Learning to Play Atari with the Help of Instruction Manuals}, author={Yue Wu and Yewen Fan and Paul Pu Liang and Amos Azaria and Yuanzhi Li and Tom Mitchell}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=qP0Drg2HuH} }
High sample complexity has long been a challenge for RL. On the other hand, humans learn to perform tasks not only from interaction or demonstrations, but also by reading unstructured text documents, e.g., instruction manuals. Instruction manuals and wiki pages are among the most abundant data that could inform agents of valuable features and policies or task-specific environmental dynamics and reward structures. Therefore, we hypothesize that the ability to utilize human-written instruction manuals to assist learning policies for specific tasks should lead to a more efficient and better-performing agent. We propose the Read and Reward framework. Read and Reward speeds up RL algorithms on Atari games by reading manuals released by the Atari game developers. Our framework consists of a QA Extraction module that extracts and summarizes relevant information from the manual and a Reasoning module that evaluates object-agent interactions based on information from the manual. An auxiliary reward is then provided to a standard A2C RL agent, when interaction is detected. Experimentally, various RL algorithms obtain significant improvement in performance and training speed when assisted by our design. Code at github.com/Holmeswww/RnR
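A deliberately tiny sketch of how a manual-derived auxiliary reward could be added on top of the environment reward given to the base RL agent. The hint dictionary stands in for whatever the QA Extraction and Reasoning modules would actually produce, and the shaping rule is an assumption for illustration rather than the paper's mechanism.

```python
def auxiliary_reward(detected_interaction: str, manual_hints: dict, bonus: float = 1.0) -> float:
    """Toy stand-in for the Reasoning module: if the manual marks an object as
    good (or bad) to interact with, add (or subtract) a small shaping bonus on
    top of the environment reward received by the base agent (e.g. A2C)."""
    hint = manual_hints.get(detected_interaction)
    if hint == "beneficial":
        return bonus
    if hint == "harmful":
        return -bonus
    return 0.0

if __name__ == "__main__":
    # Hypothetical hints a QA-extraction step might pull from an Atari manual.
    hints = {"oxygen_tank": "beneficial", "shark": "harmful"}
    print(auxiliary_reward("oxygen_tank", hints))  #  1.0
    print(auxiliary_reward("shark", hints))        # -1.0
    print(auxiliary_reward("wall", hints))         #  0.0
```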
Read and Reap the Rewards: Learning to Play Atari with the Help of Instruction Manuals
[ "Yue Wu", "Yewen Fan", "Paul Pu Liang", "Amos Azaria", "Yuanzhi Li", "Tom Mitchell" ]
Conference
poster
2302.04449
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qL3zPoWJda
@inproceedings{ vijayan2023trire, title={Tri{RE}: A Multi-Mechanism Learning Paradigm for Continual Knowledge Retention and Promotion}, author={Preetha Vijayan and Prashant Shivaram Bhat and Bahram Zonooz and Elahe Arani}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, url={https://openreview.net/forum?id=qL3zPoWJda} }
Continual learning (CL) has remained a persistent challenge for deep neural networks due to catastrophic forgetting (CF) of previously learned tasks. Several techniques such as weight regularization, experience rehearsal, and parameter isolation have been proposed to alleviate CF. Despite their relative success, these research directions have predominantly remained orthogonal and suffer from several shortcomings, while missing out on the advantages of competing strategies. On the contrary, the brain continually learns, accommodates, and transfers knowledge across tasks by simultaneously leveraging several neurophysiological processes, including neurogenesis, active forgetting, neuromodulation, metaplasticity, experience rehearsal, and context-dependent gating, rarely resulting in CF. Inspired by how the brain exploits multiple mechanisms concurrently, we propose TriRE, a novel CL paradigm that encompasses retaining the most prominent neurons for each task, revising and solidifying the extracted knowledge of current and past tasks, and actively promoting less active neurons for subsequent tasks through rewinding and relearning. Across CL settings, TriRE significantly reduces task interference and surpasses different CL approaches considered in isolation.
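To illustrate only the "retain the most prominent neurons" step, the sketch below masks the most active units for the current task and leaves the rest free for later tasks. The keep fraction and selection rule are illustrative assumptions; the revise and rewind/relearn stages of TriRE are not modeled.

```python
import numpy as np

def retain_mask(mean_activation: np.ndarray, keep_frac: float = 0.3) -> np.ndarray:
    """Toy illustration of the 'retain' step: keep the most active fraction of
    units for the current task; the remaining units stay free to be rewound and
    relearned for subsequent tasks. This mirrors only the selection idea, not
    TriRE's full retain/revise/rewind procedure."""
    k = max(1, int(keep_frac * mean_activation.size))
    idx = np.argsort(-np.abs(mean_activation))[:k]
    mask = np.zeros(mean_activation.shape, dtype=bool)
    mask[idx] = True
    return mask

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    acts = rng.normal(size=10)              # mean |activation| per hidden unit
    mask = retain_mask(acts, keep_frac=0.3)
    print("retained units:", np.where(mask)[0])
    print("free units    :", np.where(~mask)[0])
```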
TriRE: A Multi-Mechanism Learning Paradigm for Continual Knowledge Retention and Promotion
[ "Preetha Vijayan", "Prashant Shivaram Bhat", "Bahram Zonooz", "Elahe Arani" ]
Conference
poster
2310.08217
[ "https://github.com/NeurAI-Lab/TriRE" ]
-1
-1
-1
-1
0
[]
[]
[]