bibtex_url (null) | proceedings (string) | bibtext (string) | abstract (string) | title (string) | authors (sequence) | id (string) | type (string) | arxiv_id (string) | GitHub (sequence) | paper_page (string) | n_linked_authors (int64) | upvotes (int64) | num_comments (int64) | n_authors (int64) | paper_page_exists_pre_conf (int64) | Models (sequence) | Datasets (sequence) | Spaces (sequence) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | https://openreview.net/forum?id=b8xowIlZ7v | @inproceedings{
hooper2023a,
title={A case for reframing automated medical image classification as segmentation},
author={Sarah Hooper and Mayee F Chen and Khaled Kamal Saab and Kush Bhatia and Curtis Langlotz and Christopher Re},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=b8xowIlZ7v}
} | Image classification and segmentation are common applications of deep learning to radiology. While many tasks can be framed using either classification or segmentation, classification has historically been cheaper to label and more widely used. However, recent work has drastically reduced the cost of training segmentation networks. In light of this recent work, we reexamine the choice of training classification vs. segmentation models. First, we use an information theoretic approach to analyze why segmentation vs. classification models may achieve different performance on the same dataset and overarching task. We then implement multiple methods for using segmentation models to classify medical images, which we call *segmentation-for-classification*, and compare these methods against traditional classification on three retrospective datasets. We use our analysis and experiments to summarize the benefits of switching from classification to segmentation, including: improved sample efficiency, enabling improved performance with fewer labeled images (up to an order of magnitude lower), on low-prevalence classes, and on certain rare subgroups (up to 161.1\% improved recall); improved robustness to spurious correlations (up to 44.8\% improved robust AUROC); and improved model interpretability, evaluation, and error analysis. | A case for reframing automated medical image classification as segmentation | [
"Sarah Hooper",
"Mayee F Chen",
"Khaled Kamal Saab",
"Kush Bhatia",
"Curtis Langlotz",
"Christopher Re"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=b6XvK2de99 | @inproceedings{
geng2023onestep,
title={One-Step Diffusion Distillation via Deep Equilibrium Models},
author={Zhengyang Geng and Ashwini Pokle and J Zico Kolter},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=b6XvK2de99}
} | Diffusion models excel at producing high-quality samples but naively require hundreds of iterations, prompting multiple attempts to distill the generation process into a faster network. However, many existing approaches suffer from a variety of challenges: the process for distillation training can be complex, often requiring multiple training stages, and the resulting models perform poorly when utilized in single-step generative applications. In this paper, we introduce a simple yet effective means of distilling diffusion models *directly* from the initial noise to the resulting image. Of particular importance to our approach is to leverage a new Deep Equilibrium (DEQ) model as the distilled architecture: the Generative Equilibrium Transformer (GET). Our method enables fully offline training with just noise/image pairs from the diffusion model while achieving superior performance compared to existing one-step methods on comparable training budgets. We demonstrate that the DEQ architecture is crucial to this capability, as GET matches a $5\times$ larger ViT in terms of FID scores while striking a critical balance of computational cost and image quality. Code, checkpoints, and datasets are available [here](https://github.com/locuslab/get). | One-Step Diffusion Distillation via Deep Equilibrium Models | [
"Zhengyang Geng",
"Ashwini Pokle",
"J Zico Kolter"
] | Conference | poster | 2401.08639 | [
"https://github.com/locuslab/get"
] | https://huggingface.co/papers/2401.08639 | 1 | 0 | 0 | 3 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=b6FeLpKKjl | @inproceedings{
ward2023convergence,
title={Convergence of Alternating Gradient Descent for Matrix Factorization},
author={Rachel Ward and Tamara G. Kolda},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=b6FeLpKKjl}
} | We consider alternating gradient descent (AGD) with fixed step size applied to the asymmetric matrix factorization objective.
We show that, for a rank-$r$ matrix $A \in \mathbb{R}^{m \times n}$,
$T = C ( \frac{\sigma_1(A)}{\sigma_r(A)} )^2 \log(1/\epsilon)$
iterations of alternating gradient descent suffice to reach an $\epsilon$-optimal factorization
$\| A - X_{T} Y_{T}' \|^2 \leq \epsilon \| A \|^2$ with high probability
starting from an atypical random initialization. The
factors have rank $d \geq r$ so that $X_{T}\in \mathbb{R}^{m \times d}$ and $Y_{T} \in\mathbb{R}^{n \times d}$, and mild overparameterization suffices for the constant $C$ in the iteration complexity $T$ to be an absolute constant.
Experiments suggest that our proposed initialization is not merely of theoretical benefit, but rather significantly improves the convergence rate of gradient descent in practice. Our proof is conceptually simple: a uniform Polyak-Lojasiewicz (PL) inequality and uniform Lipschitz smoothness constant are guaranteed for a sufficient number of iterations, starting from our random initialization. Our proof method should be useful for extending and simplifying convergence analyses for a broader class of nonconvex low-rank factorization problems. | Convergence of Alternating Gradient Descent for Matrix Factorization | [
"Rachel Ward",
"Tamara G. Kolda"
] | Conference | spotlight | 2305.06927 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=b60wLlkBta | @inproceedings{
lin2023on,
title={On the Robustness of Removal-Based Feature Attributions},
author={Chris Lin and Ian Connick Covert and Su-In Lee},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=b60wLlkBta}
} | To explain predictions made by complex machine learning models, many feature attribution methods have been developed that assign importance scores to input features. Some recent work challenges the robustness of these methods by showing that they are sensitive to input and model perturbations, while other work addresses this issue by proposing robust attribution methods. However, previous work on attribution robustness has focused primarily on gradient-based feature attributions, whereas the robustness of removal-based attribution methods is not currently well understood. To bridge this gap, we theoretically characterize the robustness properties of removal-based feature attributions. Specifically, we provide a unified analysis of such methods and derive upper bounds for the difference between intact and perturbed attributions, under settings of both input and model perturbations. Our empirical results on synthetic and real-world data validate our theoretical results and demonstrate their practical implications, including the ability to increase attribution robustness by improving the model’s Lipschitz regularity. | On the Robustness of Removal-Based Feature Attributions | [
"Chris Lin",
"Ian Connick Covert",
"Su-In Lee"
] | Conference | poster | 2306.07462 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=b5R8mbqo9Q | @inproceedings{
liang2023a,
title={A Heavy-Tailed Algebra for Probabilistic Programming},
author={Feynman T. Liang and Liam Hodgkinson and Michael W. Mahoney},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=b5R8mbqo9Q}
} | Despite the successes of probabilistic models based on passing noise through neural networks, recent work has identified that such methods often fail to capture tail behavior accurately---unless the tails of the base distribution are appropriately calibrated. To overcome this deficiency, we propose a systematic approach for analyzing the tails of random variables, and we illustrate how this approach can be used during the static analysis (before drawing samples) pass of a probabilistic programming language (PPL) compiler. To characterize how the tails change under various operations, we develop an algebra which acts on a three-parameter family of tail asymptotics and which is based on the generalized Gamma distribution. Our algebraic operations are closed under addition and multiplication; they are capable of distinguishing sub-Gaussians with differing scales; and they handle ratios sufficiently well to reproduce the tails of most important statistical distributions directly from their definitions. Our empirical results confirm that inference algorithms that leverage our heavy-tailed algebra attain superior performance across a number of density modeling and variational inference (VI) tasks. | A Heavy-Tailed Algebra for Probabilistic Programming | [
"Feynman T. Liang",
"Liam Hodgkinson",
"Michael W. Mahoney"
] | Conference | poster | 2306.09262 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=b2wSODM7iG | @inproceedings{
gupta2023lightspeed,
title={LightSpeed: Light and Fast Neural Light Fields on Mobile Devices},
author={Aarush Gupta and Junli Cao and Chaoyang Wang and Ju Hu and Sergey Tulyakov and Jian Ren and Laszlo Attila Jeni},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=b2wSODM7iG}
} | Real-time novel-view image synthesis on mobile devices is prohibitive due to the limited computational power and storage. Volumetric rendering methods, such as NeRF and its derivatives, are not suitable for mobile devices due to their high computational cost. On the other hand, recent advances in neural light field representations have shown promising real-time view synthesis results on mobile devices. Neural light field methods learn a direct mapping from a ray representation to the pixel color. The current choice of ray representation is either stratified ray sampling or Plücker coordinates, overlooking the classic light slab (two-plane) representation, the preferred representation to interpolate between light field views. In this work, we find that the light slab is an efficient representation for learning a neural light field. More importantly, it is a lower-dimensional ray representation, enabling us to learn the 4D ray space using feature grids, which are significantly faster to train and render. Although mostly designed for frontal views, we show that the light-slab representation can be further extended to non-frontal scenes using a divide-and-conquer strategy. Our method provides better rendering quality than prior light field methods and a significantly better trade-off between rendering quality and speed. | LightSpeed: Light and Fast Neural Light Fields on Mobile Devices | [
"Aarush Gupta",
"Junli Cao",
"Chaoyang Wang",
"Ju Hu",
"Sergey Tulyakov",
"Jian Ren",
"Laszlo Attila Jeni"
] | Conference | poster | 2310.16832 | [
""
] | https://huggingface.co/papers/2310.16832 | 4 | 4 | 0 | 7 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=b1JPBGJhUi | @inproceedings{
pethick2023stable,
title={Stable Nonconvex-Nonconcave Training via Linear Interpolation},
author={Thomas Pethick and Wanyun Xie and Volkan Cevher},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=b1JPBGJhUi}
} | This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training. We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators. We construct a new optimization scheme called relaxed approximate proximal point (RAPP), which is the first 1-SCLI method to achieve last iterate convergence rates for $\rho$-comonotone problems while only requiring $\rho > -\tfrac{1}{2L}$. The construction extends to constrained and regularized settings. By replacing the inner optimizer in RAPP, we rediscover the family of Lookahead algorithms, for which we establish convergence in cohypomonotone problems even when the base optimizer is taken to be gradient descent ascent. The range of cohypomonotone problems in which Lookahead converges is further expanded by exploiting that Lookahead inherits the properties of the base optimizer. We corroborate the results with experiments on generative adversarial networks which demonstrate the benefits of the linear interpolation present in both RAPP and Lookahead. | Stable Nonconvex-Nonconcave Training via Linear Interpolation | [
"Thomas Pethick",
"Wanyun Xie",
"Volkan Cevher"
] | Conference | spotlight | 2310.13459 | [
"https://github.com/LIONS-EPFL/linear-interpolation"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=b1BhHjBxsx | @inproceedings{
feldman2023sharp,
title={Sharp Recovery Thresholds of Tensor {PCA} Spectral Algorithms},
author={Michael Jacob Feldman and David Donoho},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=b1BhHjBxsx}
} | Many applications seek to recover low-rank approximations of noisy tensor data. We consider several practical and effective matricization strategies which construct specific matrices from such tensors and then apply spectral methods; the strategies include tensor unfolding, partial tracing, power iteration, and recursive unfolding. We settle the behaviors of unfolding and partial tracing, identifying sharp thresholds in signal-to-noise ratio above which the signal is partially recovered. In particular, we extend previous results to a much larger class of tensor shapes where axis lengths may be different. For power iteration and recursive unfolding, we prove that under conditions where previous algorithms partially recover the signal, these methods achieve (asymptotically) exact recovery. Our analysis deploys random matrix theory to obtain sharp thresholds which elude perturbation and concentration bounds. Specifically, we rely upon recent disproportionate random matrix results, which describe sequences of matrices with diverging aspect ratio. | Sharp Recovery Thresholds of Tensor PCA Spectral Algorithms | [
"Michael Jacob Feldman",
"David Donoho"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=ayZpFoAu5c | @inproceedings{
razin2023on,
title={On the Ability of Graph Neural Networks to Model Interactions Between Vertices},
author={Noam Razin and Tom Verbin and Nadav Cohen},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ayZpFoAu5c}
} | Graph neural networks (GNNs) are widely used for modeling complex interactions between entities represented as vertices of a graph. Despite recent efforts to theoretically analyze the expressive power of GNNs, a formal characterization of their ability to model interactions is lacking. The current paper aims to address this gap. Formalizing strength of interactions through an established measure known as separation rank, we quantify the ability of certain GNNs to model interaction between a given subset of vertices and its complement, i.e. between the sides of a given partition of input vertices. Our results reveal that the ability to model interaction is primarily determined by the partition's walk index --- a graph-theoretical characteristic defined by the number of walks originating from the boundary of the partition. Experiments with common GNN architectures corroborate this finding. As a practical application of our theory, we design an edge sparsification algorithm named Walk Index Sparsification (WIS), which preserves the ability of a GNN to model interactions when input edges are removed. WIS is simple, computationally efficient, and in our experiments has markedly outperformed alternative methods in terms of induced prediction accuracy. More broadly, it showcases the potential of improving GNNs by theoretically analyzing the interactions they can model. | On the Ability of Graph Neural Networks to Model Interactions Between Vertices | [
"Noam Razin",
"Tom Verbin",
"Nadav Cohen"
] | Conference | poster | 2211.16494 | [
"https://github.com/noamrazin/gnn_interactions"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=axmY49ahVI | @inproceedings{
pacchiano2023experiment,
title={Experiment Planning with Function Approximation},
author={Aldo Pacchiano and Jonathan Lee and Emma Brunskill},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=axmY49ahVI}
} | We study the problem of experiment planning with function approximation in contextual bandit problems. In settings where there is a significant overhead to deploying adaptive algorithms---for example, when the execution of the data collection policies is required to be distributed, or a human in the loop is needed to implement these policies---producing in advance a set of policies for data collection is paramount. We study the setting where a large dataset of contexts but not rewards is available and may be used by the learner to design an effective data collection strategy. Although this problem has been well studied when rewards are linear, results are still missing for more complex reward models. In this work we propose two experiment planning strategies compatible with function approximation. The first is an eluder planning and sampling procedure that can recover optimality guarantees depending on the eluder dimension of the reward function class. For the second, we show that a uniform sampler achieves competitive optimality rates in the setting where the number of actions is small. We finalize our results by introducing a statistical gap that fleshes out the fundamental differences between planning and adaptive learning, and we provide results for planning with model selection. | Experiment Planning with Function Approximation | [
"Aldo Pacchiano",
"Jonathan Lee",
"Emma Brunskill"
] | Conference | poster | 2401.05193 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=axRMkinASf | @inproceedings{
flamich2023greedy,
title={Greedy Poisson Rejection Sampling},
author={Gergely Flamich},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=axRMkinASf}
} | One-shot channel simulation is a fundamental data compression problem concerned with encoding a single sample from a target distribution $Q$ using a coding distribution $P$ with as few bits as possible on average. Algorithms that solve this problem find applications in neural data compression and differential privacy and can serve as a more efficient and natural alternative to quantization-based methods. Unfortunately, existing solutions are too slow or have limited applicability, preventing their widespread adoption. In this paper, we conclusively solve one-shot channel simulation for one-dimensional problems where the target-proposal density ratio is unimodal by describing an algorithm with optimal runtime. We achieve this by constructing a rejection sampling procedure equivalent to greedily searching over the points of a Poisson process. Hence, we call our algorithm greedy Poisson rejection sampling (GPRS) and analyze the correctness and time complexity of several of its variants. Finally, we empirically verify our theorems, demonstrating that GPRS significantly outperforms the current state-of-the-art method, A* coding. | Greedy Poisson Rejection Sampling | [
"Gergely Flamich"
] | Conference | poster | 2305.15313 | [
"https://github.com/gergely-flamich/greedy-poisson-rejection-sampling"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=awbWWO0nb6 | @inproceedings{
xu2023characterization,
title={Characterization of Overfitting in Robust Multiclass Classification},
author={Jingyuan Xu and Weiwei Liu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=awbWWO0nb6}
} | This paper considers the following question: Given the number of classes m, the number of robust accuracy queries k, and the number of test examples in the dataset n, how much can adaptive algorithms robustly overfit the test dataset? We solve this problem by equivalently giving near-matching upper and lower bounds of the robust overfitting bias in multiclass classification problems. | Characterization of Overfitting in Robust Multiclass Classification | [
"Jingyuan Xu",
"Weiwei Liu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=awIpKpwTwF | @inproceedings{
belrose2023leace,
title={{LEACE}: Perfect linear concept erasure in closed form},
author={Nora Belrose and David Schneider-Joseph and Shauli Ravfogel and Ryan Cotterell and Edward Raff and Stella Biderman},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=awIpKpwTwF}
} | Concept erasure aims to remove specified features from a representation. It can improve fairness (e.g. preventing a classifier from using gender or race) and interpretability (e.g. removing a concept to observe changes in model behavior). We introduce LEAst-squares Concept Erasure (LEACE), a closed-form method which provably prevents all linear classifiers from detecting a concept while changing the representation as little as possible, as measured by a broad class of norms. We apply LEACE to large language models with a novel procedure called concept scrubbing, which erases target concept information from _every_ layer in the network. We demonstrate our method on two tasks: measuring the reliance of language models on part-of-speech information, and reducing gender bias in BERT embeddings. Our code is available at https://github.com/EleutherAI/concept-erasure. | LEACE: Perfect linear concept erasure in closed form | [
"Nora Belrose",
"David Schneider-Joseph",
"Shauli Ravfogel",
"Ryan Cotterell",
"Edward Raff",
"Stella Biderman"
] | Conference | poster | 2306.03819 | [
"https://github.com/eleutherai/concept-erasure"
] | https://huggingface.co/papers/2306.03819 | 3 | 2 | 0 | 6 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=aw1vLo7TE7 | @inproceedings{
qin2023riskaverse,
title={Risk-Averse Active Sensing for Timely Outcome Prediction under Cost Pressure},
author={Yuchao Qin and Mihaela van der Schaar and Changhee Lee},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=aw1vLo7TE7}
} | Timely outcome prediction is essential in healthcare to enable early detection and intervention of adverse events. However, in longitudinal follow-ups to patients' health status, cost-efficient acquisition of patient covariates is usually necessary due to the significant expense involved in screening and lab tests. To balance timely and accurate outcome predictions with acquisition costs, an effective active sensing strategy is crucial. In this paper, we propose a novel risk-averse active sensing approach, RAS, which addresses the composite decision problem of when to conduct the acquisition and which measurements to make. Our approach decomposes the policy into two sub-policies: an acquisition scheduler and a feature selector. Moreover, we introduce a novel risk-aversion training strategy to focus on the underrepresented subgroup of high-risk patients for whom timely and accurate prediction of disease progression is of greater value. Our method outperforms baseline active sensing approaches in experiments with both synthetic and real-world datasets, and we illustrate the significance of our policy decomposition and the necessity of a risk-averse sensing policy through case studies. | Risk-Averse Active Sensing for Timely Outcome Prediction under Cost Pressure | [
"Yuchao Qin",
"Mihaela van der Schaar",
"Changhee Lee"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=avuRopYsCg | @inproceedings{
cao2023discovering,
title={Discovering Intrinsic Spatial-Temporal Logic Rules to Explain Human Actions},
author={Chengzhi Cao and Chao Yang and Ruimao Zhang and Shuang Li},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=avuRopYsCg}
} | We propose an interpretable model to uncover the behavioral patterns of human movements by analyzing their trajectories. Our approach is based on the belief that human actions are driven by intentions and are influenced by environmental factors such as spatial relationships with surrounding objects. To model this, we use a set of spatial-temporal logic rules that include intention variables as principles. These rules are automatically discovered and used to capture the dynamics of human actions. To learn the model parameters and rule content, we design an EM learning algorithm that treats the unknown rule content as a latent variable. In the E-step, we evaluate the posterior over the latent rule content, and in the M-step, we optimize the rule generator and model parameters by maximizing the expected log-likelihood. Our model has wide-ranging applications in areas such as sports analytics, robotics, and autonomous cars. We demonstrate the model's superior interpretability and prediction performance on both pedestrian and NBA basketball player datasets, achieving promising results. | Discovering Intrinsic Spatial-Temporal Logic Rules to Explain Human Actions | [
"Chengzhi Cao",
"Chao Yang",
"Ruimao Zhang",
"Shuang Li"
] | Conference | poster | 2306.12244 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=arkmhtYLL6 | @inproceedings{
gupta2023concept,
title={Concept Distillation: Leveraging Human-Centered Explanations for Model Improvement},
author={Avani Gupta and Saurabh Saini and P J Narayanan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=arkmhtYLL6}
} | Humans use abstract *concepts* for understanding instead of hard features. Recent interpretability research has focused on human-centered concept explanations of neural networks. Concept Activation Vectors (CAVs) estimate a model's sensitivity and possible biases to a given concept. We extend CAVs from post-hoc analysis to ante-hoc training to reduce model bias through fine-tuning using an additional *Concept Loss*. In the past, concepts have been defined on the final layer of the network; we generalize this to intermediate layers, including the last convolutional layer. We also introduce *Concept Distillation*, a method to define rich and effective concepts using a pre-trained knowledgeable model as the teacher. Our method can sensitize or desensitize a model towards concepts. We show applications of concept-sensitive training to debias several classification problems. We also show a way to induce prior knowledge into a reconstruction problem. We show that concept-sensitive training can improve model interpretability, reduce biases, and induce prior knowledge. | Concept Distillation: Leveraging Human-Centered Explanations for Model Improvement | [
"Avani Gupta",
"Saurabh Saini",
"P J Narayanan"
] | Conference | poster | 2311.15303 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=apjOYp3mOa | @inproceedings{
lei2023lico,
title={{LICO}: Explainable Models with Language-Image {CO}nsistency},
author={Yiming Lei and Zilong Li and Yangyang Li and Junping Zhang and Hongming Shan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=apjOYp3mOa}
} | Interpreting the decisions of deep learning models has been actively studied since the explosion of deep neural networks. One of the most convincing interpretation approaches is salience-based visual interpretation, such as Grad-CAM, where the generation of attention maps depends merely on categorical labels. Although existing interpretation methods can provide explainable decision clues, they often yield partial correspondence between image and saliency maps due to the limited discriminative information from one-hot labels. This paper develops a Language-Image COnsistency model for explainable image classification, termed LICO, by correlating learnable linguistic prompts with corresponding visual features in a coarse-to-fine manner. Specifically, we first establish a coarse global manifold structure alignment by minimizing the distance between the distributions of image and language features. We then achieve fine-grained saliency maps by applying optimal transport (OT) theory to assign local feature maps with class-specific prompts. Extensive experimental results on eight benchmark datasets demonstrate that the proposed LICO achieves a significant improvement in generating more explainable attention maps in conjunction with existing interpretation methods such as Grad-CAM. Remarkably, LICO improves the classification performance of existing models without introducing any computational overhead during inference. | LICO: Explainable Models with Language-Image COnsistency | [
"Yiming Lei",
"Zilong Li",
"Yangyang Li",
"Junping Zhang",
"Hongming Shan"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=apFDDJOYf5 | @inproceedings{
smith2023flowcam,
title={FlowCam: Training Generalizable 3D Radiance Fields without Camera Poses via Pixel-Aligned Scene Flow},
author={Cameron Omid Smith and Yilun Du and Ayush Tewari and Vincent Sitzmann},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=apFDDJOYf5}
} | Reconstruction of 3D neural fields from posed images has emerged as a promising method for self-supervised representation learning. The key challenge preventing the deployment of these 3D scene learners on large-scale video data is their dependence on precise camera poses from structure-from-motion, which is prohibitively expensive to run at scale. We propose a method that jointly reconstructs camera poses and 3D neural scene representations online and in a single forward pass. We estimate poses by first lifting frame-to-frame optical flow to 3D scene flow via differentiable rendering, preserving locality and shift-equivariance of the image processing backbone. SE(3) camera pose estimation is then performed via a weighted least-squares fit to the scene flow field. This formulation enables us to jointly supervise pose estimation and a generalizable neural scene representation via re-rendering the input video, and thus, train end-to-end and fully self-supervised on real-world video datasets. We demonstrate that our method performs robustly on diverse, real-world video, notably on sequences traditionally challenging to optimization-based pose estimation techniques. | FlowCam: Training Generalizable 3D Radiance Fields without Camera Poses via Pixel-Aligned Scene Flow | [
"Cameron Omid Smith",
"Yilun Du",
"Ayush Tewari",
"Vincent Sitzmann"
] | Conference | poster | 2306.00180 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=alLs7EtRJP | @inproceedings{
liang2023factorized,
title={Factorized Contrastive Learning: Going Beyond Multi-view Redundancy},
author={Paul Pu Liang and Zihao Deng and Martin Q. Ma and James Zou and Louis-Philippe Morency and Russ Salakhutdinov},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=alLs7EtRJP}
} | In a wide range of multimodal tasks, contrastive learning has become a particularly appealing approach since it can successfully learn representations from abundant unlabeled data with only pairing information (e.g., image-caption or video-audio pairs). Underpinning these approaches is the assumption of multi-view redundancy - that shared information between modalities is necessary and sufficient for downstream tasks. However, in many real-world settings, task-relevant information is also contained in modality-unique regions: information that is only present in one modality but still relevant to the task. How can we learn self-supervised multimodal representations to capture both shared and unique information relevant to downstream tasks? This paper proposes FactorCL, a new multimodal representation learning method to go beyond multi-view redundancy. FactorCL is built from three new contributions: (1) factorizing task-relevant information into shared and unique representations, (2) capturing task-relevant information via maximizing MI lower bounds and removing task-irrelevant information via minimizing MI upper bounds, and (3) multimodal data augmentations to approximate task relevance without labels. On large-scale real-world datasets, FactorCL captures both shared and unique information and achieves state-of-the-art results on six benchmarks. | Factorized Contrastive Learning: Going Beyond Multi-view Redundancy | [
"Paul Pu Liang",
"Zihao Deng",
"Martin Q. Ma",
"James Zou",
"Louis-Philippe Morency",
"Russ Salakhutdinov"
] | Conference | poster | 2306.05268 | [
"https://github.com/pliang279/factorcl"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=aky0dKv9ip | @inproceedings{
tian2023decompose,
title={Decompose a Task into Generalizable Subtasks in Multi-Agent Reinforcement Learning},
author={Zikang Tian and Ruizhi Chen and Xing Hu and Ling Li and Rui Zhang and Fan Wu and Shaohui Peng and Jiaming Guo and Zidong Du and Qi Guo and Yunji Chen},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=aky0dKv9ip}
} | In recent years, Multi-Agent Reinforcement Learning (MARL) techniques have made significant strides in achieving high asymptotic performance on single tasks. However, there has been limited exploration of model transferability across tasks. Training a model from scratch for each task can be time-consuming and expensive, especially for large-scale Multi-Agent Systems. Therefore, it is crucial to develop methods for generalizing the model across tasks. Considering that there exist task-independent subtasks across MARL tasks, a model that can decompose such subtasks from the source task could generalize to target tasks. However, ensuring true task-independence of subtasks poses a challenge. In this paper, we propose to \textbf{d}ecompose a \textbf{t}ask in\textbf{to} a series of \textbf{g}eneralizable \textbf{s}ubtasks (DT2GS), a novel framework that addresses this challenge by utilizing a scalable subtask encoder and an adaptive subtask semantic module. We show that these components endow subtasks with two properties critical for task-independence: avoiding overfitting to the source task and maintaining consistent yet scalable semantics across tasks. Empirical results demonstrate that DT2GS possesses sound zero-shot generalization capability across tasks, exhibits sufficient transferability, and outperforms existing methods in both multi-task and single-task problems. | Decompose a Task into Generalizable Subtasks in Multi-Agent Reinforcement Learning | [
"Zikang Tian",
"Ruizhi Chen",
"Xing Hu",
"Ling Li",
"Rui Zhang",
"Fan Wu",
"Shaohui Peng",
"Jiaming Guo",
"Zidong Du",
"Qi Guo",
"Yunji Chen"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=ajnThDhuq6 | @inproceedings{
ghiasi2023improving,
title={Improving Robustness with Adaptive Weight Decay},
author={Amin Ghiasi and Ali Shafahi and Reza Ardekani},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ajnThDhuq6}
} | We propose adaptive weight decay, which automatically tunes the hyper-parameter for weight decay during each training iteration. For classification problems, we propose changing the value of the weight decay hyper-parameter on the fly based on the strength of updates from the classification loss (i.e., gradient of cross-entropy), and the regularization loss (i.e., $\ell_2$-norm of the weights). We show that this simple modification can result in large improvements in adversarial robustness — an area which suffers from robust overfitting — without requiring extra data across various datasets and architecture choices. For example, our reformulation results in 20\% relative robustness improvement for CIFAR-100, and 10\% relative robustness improvement on CIFAR-10 compared to the best tuned hyper-parameters of traditional weight decay, resulting in models that have comparable performance to SOTA robustness methods. In addition, this method has other desirable properties, such as less sensitivity to learning rate and smaller weight norms; the latter contributes to robustness against overfitting to label noise and to pruning. | Improving Robustness with Adaptive Weight Decay | [
"Amin Ghiasi",
"Ali Shafahi",
"Reza Ardekani"
] | Conference | poster | 2210.00094 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=aig7sgdRfI | @inproceedings{
shah2023learning,
title={Learning Mixtures of Gaussians Using the {DDPM} Objective},
author={Kulin Shah and Sitan Chen and Adam Klivans},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=aig7sgdRfI}
} | Recent works have shown that diffusion models can learn essentially any distribution provided one can perform score estimation.
Yet it remains poorly understood under what settings score estimation is possible, let alone when practical gradient-based algorithms for this task can provably succeed.
In this work, we give the first provably efficient results for one of the most fundamental distribution families, Gaussian mixture models.
We prove that GD on the denoising diffusion probabilistic model (DDPM) objective can efficiently recover the ground truth parameters of the mixture model in the following two settings:
1. We show GD with random initialization learns mixtures of two spherical Gaussians in $d$ dimensions with $1/\text{poly}(d)$-separated centers.
2. We show GD with a warm start learns mixtures of $K$ spherical Gaussians with $\Omega(\sqrt{\log(\min(K,d))})$-separated centers.
A key ingredient in our proofs is a new connection between score-based methods and two other approaches to distribution learning, EM and spectral methods. | Learning Mixtures of Gaussians Using the DDPM Objective | [
"Kulin Shah",
"Sitan Chen",
"Adam Klivans"
] | Conference | poster | 2307.01178 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=afKnrwJBAl | @inproceedings{
shi2023crossepisodic,
title={Cross-Episodic Curriculum for Transformer Agents},
author={Lucy Xiaoyang Shi and Yunfan Jiang and Jake Grigsby and Linxi Fan and Yuke Zhu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=afKnrwJBAl}
} | We present a new algorithm, Cross-Episodic Curriculum (CEC), to boost the learning efficiency and generalization of Transformer agents. Central to CEC is the placement of cross-episodic experiences into a Transformer’s context, which forms the basis of a curriculum. By sequentially structuring online learning trials and mixed-quality demonstrations, CEC constructs curricula that encapsulate learning progression and proficiency increase across episodes. Such synergy combined with the potent pattern recognition capabilities of Transformer models delivers a powerful cross-episodic attention mechanism. The effectiveness of CEC is demonstrated under two representative scenarios: one involving multi-task reinforcement learning with discrete control, such as in DeepMind Lab, where the curriculum captures the learning progression in both individual and progressively complex settings; and the other involving imitation learning with mixed-quality data for continuous control, as seen in RoboMimic, where the curriculum captures the improvement in demonstrators' expertise. In all instances, policies resulting from CEC exhibit superior performance and strong generalization. Code is open-sourced on the project website https://cec-agent.github.io/ to facilitate research on Transformer agent learning. | Cross-Episodic Curriculum for Transformer Agents | [
"Lucy Xiaoyang Shi",
"Yunfan Jiang",
"Jake Grigsby",
"Linxi Fan",
"Yuke Zhu"
] | Conference | poster | 2310.08549 | [
""
] | https://huggingface.co/papers/2310.08549 | 0 | 0 | 0 | 5 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=aec58UfBzA | @inproceedings{
mcdonnell2023ranpac,
title={Ran{PAC}: Random Projections and Pre-trained Models for Continual Learning},
author={Mark McDonnell and Dong Gong and Amin Parvaneh and Ehsan Abbasnejad and Anton van den Hengel},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=aec58UfBzA}
} | Continual learning (CL) aims to incrementally learn different tasks (such as classification) in a non-stationary data stream without forgetting old ones. Most CL works focus on tackling catastrophic forgetting under a learning-from-scratch paradigm. However, with the increasing prominence of foundation models, pre-trained models equipped with informative representations have become available for various downstream requirements. Several CL methods based on pre-trained models have been explored, either utilizing pre-extracted features directly (which makes bridging distribution gaps challenging) or incorporating adaptors (which may be subject to forgetting). In this paper, we propose a concise and effective approach for CL with pre-trained models. Given that forgetting occurs during parameter updating, we contemplate an alternative approach that exploits training-free random projectors and class-prototype accumulation, which thus bypasses the issue. Specifically, we inject a frozen Random Projection layer with nonlinear activation between the pre-trained model's feature representations and output head, which captures interactions between features with expanded dimensionality, providing enhanced linear separability for class-prototype-based CL. We also demonstrate the importance of decorrelating the class-prototypes to reduce the distribution disparity when using pre-trained representations. These techniques prove to be effective and circumvent the problem of forgetting for both class- and domain-incremental continual learning. Compared to previous methods applied to pre-trained ViT-B/16 models, we reduce final error rates by between 20% and 62% on seven class-incremental benchmark datasets, despite not using any rehearsal memory. We conclude that the full potential of pre-trained models for simple, effective, and fast continual learning has not hitherto been fully tapped. Code is available at https://github.com/RanPAC/RanPAC. | RanPAC: Random Projections and Pre-trained Models for Continual Learning | [
"Mark McDonnell",
"Dong Gong",
"Amin Parvaneh",
"Ehsan Abbasnejad",
"Anton van den Hengel"
] | Conference | poster | 2307.02251 | [
"https://github.com/ranpac/ranpac"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=adq0oXb9KM | @inproceedings{
manduchi2023tree,
title={Tree Variational Autoencoders},
author={Laura Manduchi and Moritz Vandenhirtz and Alain Ryser and Julia E Vogt},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=adq0oXb9KM}
} | We propose Tree Variational Autoencoder (TreeVAE), a new generative hierarchical clustering model
that learns a flexible tree-based posterior distribution over latent variables. TreeVAE hierarchically divides samples according to their intrinsic characteristics, shedding light on hidden structures in the data. It adapts its architecture to discover the optimal tree for encoding dependencies between latent variables. The proposed tree-based generative architecture enables lightweight conditional inference and improves generative performance by utilizing specialized leaf decoders.
We show that TreeVAE uncovers underlying clusters in the data and finds meaningful hierarchical relations between the different groups on a variety of datasets, including real-world imaging data.
We show empirically that TreeVAE provides a more competitive log-likelihood lower bound than its sequential counterparts.
Finally, due to its generative nature, TreeVAE is able to generate new samples from the discovered clusters via conditional sampling. | Tree Variational Autoencoders | [
"Laura Manduchi",
"Moritz Vandenhirtz",
"Alain Ryser",
"Julia E Vogt"
] | Conference | spotlight | [
"https://github.com/tsa87/semi-jtvae"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=ad3JNoR2np | @inproceedings{
zhang2023adapting,
title={Adapting to Continuous Covariate Shift via Online Density Ratio Estimation},
author={Yu-Jie Zhang and Zhen-Yu Zhang and Peng Zhao and Masashi Sugiyama},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ad3JNoR2np}
} | Dealing with distribution shifts is one of the central challenges for modern machine learning. One fundamental situation is covariate shift, where the input distribution of the data changes from the training to the testing stage while the input-conditional output distribution remains unchanged. In this paper, we initiate the study of a more challenging scenario --- continuous covariate shift --- in which the test data appear sequentially, and their distributions can shift continuously. Our goal is to adaptively train the predictor such that its prediction risk accumulated over time can be minimized. Starting with importance-weighted learning, we theoretically show that the method works effectively if the time-varying density ratios of test and train inputs can be accurately estimated. However, existing density ratio estimation methods would fail due to data scarcity at each time step. To this end, we propose an online density ratio estimation method that can appropriately reuse historical information. Our method is proven to perform well, enjoying a dynamic regret bound that finally leads to an excess risk guarantee for the predictor. Empirical results also validate its effectiveness. | Adapting to Continuous Covariate Shift via Online Density Ratio Estimation | [
"Yu-Jie Zhang",
"Zhen-Yu Zhang",
"Peng Zhao",
"Masashi Sugiyama"
] | Conference | poster | 2302.02552 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=ackajXqei2 | @inproceedings{
hu2023mixed,
title={Mixed Samples as Probes for Unsupervised Model Selection in Domain Adaptation},
author={Dapeng Hu and Jian Liang and Jun Hao Liew and Chuhui Xue and Song Bai and Xinchao Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ackajXqei2}
} | Unsupervised domain adaptation (UDA) has been widely applied in improving model generalization on unlabeled target data. However, accurately selecting the best UDA model for the target domain is challenging due to the absence of labeled target data and domain distribution shifts. Traditional model selection approaches involve training extra models with source data to estimate the target validation risk. Recent studies propose practical methods that are based on measuring various properties of model predictions on target data. Although effective for some UDA models, these methods often lack stability and may lead to poor selections for other UDA models.
In this paper, we present MixVal, an innovative model selection method that operates solely with unlabeled target data during inference. MixVal leverages mixed target samples with pseudo labels to directly probe the learned target structure by each UDA model. Specifically, MixVal employs two distinct types of probes: the intra-cluster mixed samples for evaluating neighborhood density and the inter-cluster mixed samples for investigating the classification boundary. With this comprehensive probing strategy, MixVal elegantly combines the strengths of two state-of-the-art model selection methods, Entropy and SND. We extensively evaluate MixVal on 11 UDA methods across 4 adaptation settings, including classification and segmentation tasks. Experimental results consistently demonstrate that MixVal achieves state-of-the-art performance and maintains exceptional stability in model selection.
Code is available at \url{https://github.com/LHXXHB/MixVal}. | Mixed Samples as Probes for Unsupervised Model Selection in Domain Adaptation | [
"Dapeng Hu",
"Jian Liang",
"Jun Hao Liew",
"Chuhui Xue",
"Song Bai",
"Xinchao Wang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=aa8KsqfTPa | @inproceedings{
sudhakaran2023mariogpt,
title={Mario{GPT}: Open-Ended Text2Level Generation through Large Language Models},
author={Shyam Sudhakaran and Miguel Gonz{\'a}lez-Duque and Matthias Freiberger and Claire Glanois and Elias Najarro and Sebastian Risi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=aa8KsqfTPa}
} | Procedural Content Generation (PCG) is a technique to generate complex and diverse environments in an automated way. However, while generating content with PCG methods is often straightforward, generating meaningful content that reflects specific intentions and constraints remains challenging. Furthermore, many PCG algorithms lack the ability to generate content in an open-ended manner. Recently, Large Language Models (LLMs) have been shown to be incredibly effective in many diverse domains. These trained LLMs can be fine-tuned, re-using information and accelerating training for new tasks. Here, we introduce MarioGPT, a fine-tuned GPT2 model trained to generate tile-based game levels, in our case Super Mario Bros levels. MarioGPT can not only generate diverse levels but can also be text-prompted for controllable level generation, addressing one of the key challenges of current PCG techniques. As far as we know, MarioGPT is the first text-to-level model and, combined with novelty search, it enables the generation of diverse levels with varying play-style dynamics (i.e. player paths) and the open-ended discovery of an increasingly diverse range of content.
Code available at https://github.com/shyamsn97/mario-gpt. | MarioGPT: Open-Ended Text2Level Generation through Large Language Models | [
"Shyam Sudhakaran",
"Miguel González-Duque",
"Matthias Freiberger",
"Claire Glanois",
"Elias Najarro",
"Sebastian Risi"
] | Conference | poster | 2302.05981 | [
"https://github.com/shyamsn97/mario-gpt"
] | https://huggingface.co/papers/2302.05981 | 0 | 0 | 0 | 6 | 1 | [
"shyamsn97/Mario-GPT2-700-context-length"
] | [] | [
"multimodalart/mariogpt",
"Jak12-3/shyamsn97-Mario-GPT2-700-context-length",
"takusan0000/shyamsn97-Mario-GPT2-700-context-length",
"xrdev/Mario"
] |
null | https://openreview.net/forum?id=aZ9hvpnp0k | @inproceedings{
schneider2023anchor,
title={Anchor Data Augmentation},
author={Nora Schneider and Shirin Goshtasbpour and Fernando Perez-Cruz},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=aZ9hvpnp0k}
} | We propose a novel algorithm for data augmentation in nonlinear over-parametrized regression. Our data augmentation algorithm borrows from the literature on causality. In contrast to current state-of-the-art solutions that rely on modifications of the Mixup algorithm, we extend the recently proposed distributionally robust Anchor regression (AR) method for data augmentation. Our Anchor Data Augmentation (ADA) uses several replicas of the modified samples in AR to provide more training examples, leading to more robust regression predictions. We apply ADA to linear and nonlinear regression problems using neural networks. ADA is competitive with state-of-the-art C-Mixup solutions. | Anchor Data Augmentation | [
"Nora Schneider",
"Shirin Goshtasbpour",
"Fernando Perez-Cruz"
] | Conference | poster | 2311.06965 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=aZ44Na3l9p | @inproceedings{
raff2023reproducibility,
title={Reproducibility in Multiple Instance Learning: A Case For Algorithmic Unit Tests},
author={Edward Raff and James Holt},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=aZ44Na3l9p}
} | Multiple Instance Learning (MIL) is a sub-domain of classification problems with positive and negative labels and a "bag" of inputs, where the label is positive if and only if a positive element is contained within the bag, and otherwise is negative. Training in this context requires associating the bag-wide label to instance-level information, and implicitly contains a causal assumption and asymmetry to the task (i.e., you can't swap the labels without changing the semantics). MIL problems occur in healthcare (one malignant cell indicates cancer), cyber security (one malicious executable makes an infected computer), and many other tasks. In this work, we examine five of the most prominent deep-MIL models and find that none of them respects the standard MIL assumption. They are able to learn anti-correlated instances, i.e., defaulting to "positive" labels until seeing a negative counter-example, which should not be possible for a correct MIL model. We suspect that enhancements and other works derived from these models will share the same issue. In any context in which these models are being used, this creates the potential for learning incorrect models, which creates risk of operational failure. We identify and demonstrate this problem via a proposed ``algorithmic unit test'', where we create synthetic datasets that can be solved by a MIL respecting model, and which clearly reveal learning that violates MIL assumptions. The five evaluated methods each fail one or more of these tests. This provides a model-agnostic way to identify violations of modeling assumptions, which we hope will be useful for future development and evaluation of MIL models. | Reproducibility in Multiple Instance Learning: A Case For Algorithmic Unit Tests | [
"Edward Raff",
"James Holt"
] | Conference | poster | 2310.17867 | [
""
] | https://huggingface.co/papers/2310.17867 | 0 | 0 | 0 | 2 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=aW9BqtRQkh | @inproceedings{
shi2023language,
title={Language Models Can Improve Event Prediction by Few-Shot Abductive Reasoning},
author={Xiaoming Shi and Siqiao Xue and Kangrui Wang and Fan Zhou and James Y. Zhang and JUN ZHOU and Chenhao Tan and Hongyuan Mei},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=aW9BqtRQkh}
} | Large language models have shown astonishing performance on a wide range of reasoning tasks. In this paper, we investigate whether they could reason about real-world events and help improve the prediction performance of event sequence models. We design LAMP, a framework that integrates a large language model in event prediction. Particularly, the language model performs abductive reasoning to assist an event sequence model: the event model proposes predictions on future events given the past; instructed by a few expert-annotated demonstrations, the language model learns to suggest possible causes for each proposal; a search module finds out the previous events that match the causes; a scoring function learns to examine whether the retrieved events could actually cause the proposal. Through extensive experiments on several challenging real-world datasets, we demonstrate that our framework---thanks to the reasoning capabilities of large language models---could significantly outperform the state-of-the-art event sequence models. | Language Models Can Improve Event Prediction by Few-Shot Abductive Reasoning | [
"Xiaoming Shi",
"Siqiao Xue",
"Kangrui Wang",
"Fan Zhou",
"James Y. Zhang",
"JUN ZHOU",
"Chenhao Tan",
"Hongyuan Mei"
] | Conference | poster | 2305.16646 | [
"https://github.com/ilampard/lamp"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=aW5bSuduF1 | @inproceedings{
wang2023drift,
title={Drift doesn't Matter: Dynamic Decomposition with Diffusion Reconstruction for Unstable Multivariate Time Series Anomaly Detection},
author={Chengsen Wang and Zirui Zhuang and Qi Qi and Jingyu Wang and Xingyu Wang and Haifeng Sun and Jianxin Liao},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=aW5bSuduF1}
} | Many unsupervised methods have recently been proposed for multivariate time series anomaly detection. However, existing works mainly focus on stable data yet often omit the drift generated from non-stationary environments, which may lead to numerous false alarms. We propose **D**ynamic **D**ecomposition with **D**iffusion **R**econstruction (D$^3$R), a novel anomaly detection network for real-world unstable data to fill the gap. D$^3$R tackles the drift via decomposition and reconstruction. In the decomposition procedure, we utilize data-time mix-attention to dynamically decompose long-period multivariate time series, overcoming the limitation of the local sliding window. The information bottleneck is critical yet difficult to determine in the reconstruction procedure. To avoid retraining once the bottleneck changes, we control it externally by noise diffusion and directly reconstruct the polluted data. The whole model can be trained end-to-end. Extensive experiments on various real-world datasets demonstrate that D$^3$R significantly outperforms existing methods, with an 11% average relative improvement over the previous SOTA models. | Drift doesn't Matter: Dynamic Decomposition with Diffusion Reconstruction for Unstable Multivariate Time Series Anomaly Detection | [
"Chengsen Wang",
"Zirui Zhuang",
"Qi Qi",
"Jingyu Wang",
"Xingyu Wang",
"Haifeng Sun",
"Jianxin Liao"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=aRBa0lSxEB | @inproceedings{
jaghargh2023a,
title={A Dynamical System View of Langevin-Based Non-Convex Sampling},
author={Mohammad Reza Karimi Jaghargh and Ya-Ping Hsieh and Andreas Krause},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=aRBa0lSxEB}
} | Non-convex sampling is a key challenge in machine learning, central to non-convex optimization in deep learning as well as to approximate probabilistic inference. Despite its significance, theoretically there remain some important challenges: Existing guarantees suffer from the drawback of lacking guarantees for the last-iterates, and little is known beyond the elementary schemes of stochastic gradient Langevin dynamics. To address these issues, we develop a novel framework that lifts the above issues by harnessing several tools from the theory of dynamical systems. Our key result is that, for a large class of state-of-the-art sampling schemes, their last-iterate convergence in Wasserstein distances can be reduced to the study of their continuous-time counterparts, which is much better understood. Coupled with standard assumptions of MCMC sampling, our theory immediately yields the last-iterate Wasserstein convergence of many advanced sampling schemes such as mirror Langevin, proximal, randomized mid-point, and Runge-Kutta methods. | A Dynamical System View of Langevin-Based Non-Convex Sampling | [
"Mohammad Reza Karimi Jaghargh",
"Ya-Ping Hsieh",
"Andreas Krause"
] | Conference | spotlight | 2210.13867 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=aN0llPIbdg | @inproceedings{
ortiz2023scalespace,
title={Scale-Space Hypernetworks for Efficient Biomedical Image Analysis},
author={Jose Javier Gonzalez Ortiz and John Guttag and Adrian V Dalca},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=aN0llPIbdg}
} | Convolutional Neural Networks (CNNs) are the predominant model used for a variety of medical image analysis tasks. At inference time, these models are computationally intensive, especially with volumetric data. In principle, it is possible to trade accuracy for computational efficiency by manipulating the rescaling factor in the downsample and upsample layers of CNN architectures. However, properly exploring the accuracy-efficiency trade-off is prohibitively expensive with existing models. To address this, we introduce Scale-Space HyperNetworks (SSHN), a method that learns a spectrum of CNNs with varying internal rescaling factors. A single SSHN characterizes an entire Pareto accuracy-efficiency curve of models that match, and occasionally surpass, the outcomes of training many separate networks with fixed rescaling factors. We demonstrate the proposed approach in several medical image analysis applications, comparing SSHN against strategies with both fixed and dynamic rescaling factors. We find that SSHN consistently provides a better accuracy-efficiency trade-off at a fraction of the training cost. Trained SSHNs enable the user to quickly choose a rescaling factor that appropriately balances accuracy and computational efficiency for their particular needs at inference. | Scale-Space Hypernetworks for Efficient Biomedical Image Analysis | [
"Jose Javier Gonzalez Ortiz",
"John Guttag",
"Adrian V Dalca"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=aMjaEkkXJx | @inproceedings{
geshkovski2023the,
title={The emergence of clusters in self-attention dynamics},
author={Borjan Geshkovski and Cyril Letrouit and Yury Polyanskiy and Philippe Rigollet},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=aMjaEkkXJx}
} | Viewing Transformers as interacting particle systems, we describe the geometry of learned representations when the weights are not time-dependent. We show that particles, representing tokens, tend to cluster toward particular limiting objects as time tends to infinity. Using techniques from dynamical systems and partial differential equations, we show that the type of limiting object that emerges depends on the spectrum of the value matrix. Additionally, in the one-dimensional case we prove that the self-attention matrix converges to a low-rank Boolean matrix. The combination of these results mathematically confirms the empirical observation made by Vaswani et al. [VSP'17] that leaders appear in a sequence of tokens when processed by Transformers. | The emergence of clusters in self-attention dynamics | [
"Borjan Geshkovski",
"Cyril Letrouit",
"Yury Polyanskiy",
"Philippe Rigollet"
] | Conference | poster | 2305.05465 | [
"https://github.com/borjang/2023-transformers"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=aMTiwdK3y8 | @inproceedings{
lee2023fourierhandflow,
title={FourierHandFlow: Neural 4D Hand Representation Using Fourier Query Flow},
author={Jihyun Lee and Junbong Jang and Donghwan Kim and Minhyuk Sung and Tae-Kyun Kim},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=aMTiwdK3y8}
} | Recent 4D shape representations model continuous temporal evolution of implicit shapes by (1) learning query flows without leveraging shape and articulation priors or (2) decoding shape occupancies separately for each time value. Thus, they do not effectively capture implicit correspondences between articulated shapes or regularize jittery temporal deformations. In this work, we present FourierHandFlow, which is a spatio-temporally continuous representation for human hands that combines a 3D occupancy field with articulation-aware query flows represented as Fourier series. Given an input RGB sequence, we aim to learn a fixed number of Fourier coefficients for each query flow to guarantee smooth and continuous temporal shape dynamics. To effectively model spatio-temporal deformations of articulated hands, we compose our 4D representation based on two types of Fourier query flow: (1) pose flow that models query dynamics influenced by hand articulation changes via implicit linear blend skinning and (2) shape flow that models query-wise displacement flow. In the experiments, our method achieves state-of-the-art results on video-based 4D reconstruction while being computationally more efficient than the existing 3D/4D implicit shape representations. We additionally show our results on motion inter- and extrapolation and texture transfer using the learned correspondences of implicit shapes. To the best of our knowledge, FourierHandFlow is the first neural 4D continuous hand representation learned from RGB videos. The code will be publicly accessible. | FourierHandFlow: Neural 4D Hand Representation Using Fourier Query Flow | [
"Jihyun Lee",
"Junbong Jang",
"Donghwan Kim",
"Minhyuk Sung",
"Tae-Kyun Kim"
] | Conference | poster | 2307.08100 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=aLLuYpn83y | @inproceedings{
li2023inferencetime,
title={Inference-Time Intervention: Eliciting Truthful Answers from a Language Model},
author={Kenneth Li and Oam Patel and Fernanda Vi{\'e}gas and Hanspeter Pfister and Martin Wattenberg},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=aLLuYpn83y}
} | We introduce Inference-Time Intervention (ITI), a technique designed to enhance the "truthfulness" of large language models (LLMs). ITI operates by shifting model activations during inference, following a learned set of directions across a limited number of attention heads. This intervention significantly improves the performance of LLaMA models on the TruthfulQA benchmark. On an instruction-finetuned LLaMA called Alpaca, ITI improves its truthfulness from $32.5\%$ to $65.1\%$. We identify a tradeoff between truthfulness and helpfulness and demonstrate how to balance it by tuning the intervention strength. ITI is minimally invasive and computationally inexpensive. Moreover, the technique is data efficient: while approaches like RLHF require extensive annotations, ITI locates truthful directions using only a few hundred examples. Our findings suggest that LLMs may have an internal representation of the likelihood of something being true, even as they produce falsehoods on the surface. | Inference-Time Intervention: Eliciting Truthful Answers from a Language Model | [
"Kenneth Li",
"Oam Patel",
"Fernanda Viégas",
"Hanspeter Pfister",
"Martin Wattenberg"
] | Conference | spotlight | 2306.03341 | [
"https://github.com/likenneth/honest_llama"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=aIpGtPwXny | @inproceedings{
schmied2023learning,
title={Learning to Modulate pre-trained Models in {RL}},
author={Thomas Schmied and Markus Hofmarcher and Fabian Paischer and Razvan Pascanu and Sepp Hochreiter},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=aIpGtPwXny}
} | Reinforcement Learning (RL) has been successful in various domains like robotics, game playing, and simulation. While RL agents have shown impressive capabilities in their specific tasks, they insufficiently adapt to new tasks. In supervised learning, this adaptation problem is addressed by large-scale pre-training followed by fine-tuning to new down-stream tasks. Recently, pre-training on multiple tasks has been gaining traction in RL. However, fine-tuning a pre-trained model often suffers from catastrophic forgetting. That is, the performance on the pre-training tasks deteriorates when fine-tuning on new tasks. To investigate the catastrophic forgetting phenomenon, we first jointly pre-train a model on datasets from two benchmark suites, namely Meta-World and DMControl. Then, we evaluate and compare a variety of fine-tuning methods prevalent in natural language processing, both in terms of performance on new tasks, and how well performance on pre-training tasks is retained. Our study shows that with most fine-tuning approaches, the performance on pre-training tasks deteriorates significantly. Therefore, we propose a novel method, Learning-to-Modulate (L2M), that avoids the degradation of learned skills by modulating the information flow of the frozen pre-trained model via a learnable modulation pool. Our method achieves state-of-the-art performance on the Continual-World benchmark, while retaining performance on the pre-training tasks. Finally, to aid future research in this area, we release a dataset encompassing 50 Meta-World and 16 DMControl tasks. | Learning to Modulate pre-trained Models in RL | [
"Thomas Schmied",
"Markus Hofmarcher",
"Fabian Paischer",
"Razvan Pascanu",
"Sepp Hochreiter"
] | Conference | poster | 2306.14884 | [
"https://github.com/ml-jku/l2m"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=aIUnoHuENG | @inproceedings{
kelner2023feature,
title={Feature Adaptation for Sparse Linear Regression},
author={Jonathan Kelner and Frederic Koehler and Raghu Meka and Dhruv Rohatgi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=aIUnoHuENG}
} | Sparse linear regression is a central problem in high-dimensional statistics. We study the correlated random design setting, where the covariates are drawn from a multivariate Gaussian $N(0,\Sigma)$, and we seek an estimator with small excess risk.
If the true signal is $t$-sparse, information-theoretically, it is possible to achieve strong recovery guarantees with only $O(t\log n)$ samples. However, computationally efficient algorithms have sample complexity linear in (some variant of) the *condition number* of $\Sigma$. Classical algorithms such as the Lasso can require significantly more samples than necessary even if there is only a single sparse approximate dependency among the covariates.
We provide a polynomial-time algorithm that, given $\Sigma$, automatically adapts the Lasso to tolerate a small number of approximate dependencies. In particular, we achieve near-optimal sample complexity for constant sparsity and if $\Sigma$ has few ``outlier'' eigenvalues.
Our algorithm fits into a broader framework of *feature adaptation* for sparse linear regression with ill-conditioned covariates. With this framework, we additionally provide the first polynomial-factor improvement over brute-force search for constant sparsity $t$ and arbitrary covariance $\Sigma$. | Feature Adaptation for Sparse Linear Regression | [
"Jonathan Kelner",
"Frederic Koehler",
"Raghu Meka",
"Dhruv Rohatgi"
] | Conference | spotlight | 2305.16892 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=aINqoP32cb | @inproceedings{
cardenas2023csml,
title={{CS}4{ML}: A general framework for active learning with arbitrary data based on Christoffel functions},
author={Juan M Cardenas and Ben Adcock and Nick Dexter},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=aINqoP32cb}
} | We introduce a general framework for active learning in regression problems. Our framework extends the standard setup by allowing for general types of data, rather than merely pointwise samples of the target function. This generalization covers many cases of practical interest, such as data acquired in transform domains (e.g., Fourier data), vector-valued data (e.g., gradient-augmented data), data acquired along continuous curves, and multimodal data (i.e., combinations of different types of measurements). Our framework considers random sampling according to a finite number of sampling measures and arbitrary nonlinear approximation spaces (model classes). We introduce the concept of \textit{generalized Christoffel functions} and show how these can be used to optimize the sampling measures. We prove that this leads to near-optimal sample complexity in various important cases. This paper focuses on applications in scientific computing, where active learning is often desirable, since it is usually expensive to generate data. We demonstrate the efficacy of our framework for gradient-augmented learning with polynomials, Magnetic Resonance Imaging (MRI) using generative models, and adaptive sampling for solving PDEs using Physics-Informed Neural Networks (PINNs). | CS4ML: A general framework for active learning with arbitrary data based on Christoffel functions | [
"Juan M Cardenas",
"Ben Adcock",
"Nick Dexter"
] | Conference | spotlight | 2306.00945 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=aGZp61S9Lj | @inproceedings{
xu2023enhancing,
title={Enhancing Adaptive History Reserving by Spiking Convolutional Block Attention Module in Recurrent Neural Networks},
author={Qi Xu and Yuyuan Gao and Jiangrong Shen and Yaxin Li and Xuming Ran and Huajin Tang and Gang Pan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=aGZp61S9Lj}
} | Spiking neural networks (SNNs) serve as one type of efficient model to process spatio-temporal patterns in time series, such as the Address-Event Representation data collected from a Dynamic Vision Sensor (DVS). Although convolutional SNNs have achieved remarkable performance on these AER datasets, benefiting from the predominant spatial feature extraction ability of the convolutional structure, they ignore temporal features related to sequential time points. In this paper, we develop a recurrent spiking neural network (RSNN) model embedded with an advanced spiking convolutional block attention module (SCBAM) component to combine both spatial and temporal features of spatio-temporal patterns. It invokes the history information in spatial and temporal channels adaptively through SCBAM, which brings the advantages of efficient memory calling and history redundancy elimination. The performance of our model was evaluated on the DVS128-Gesture dataset and other time-series datasets. The experimental results show that the proposed SRNN-SCBAM model makes better use of the history information in spatial and temporal dimensions with less memory space, and achieves higher accuracy compared to other models. | Enhancing Adaptive History Reserving by Spiking Convolutional Block Attention Module in Recurrent Neural Networks | [
"Qi Xu",
"Yuyuan Gao",
"Jiangrong Shen",
"Yaxin Li",
"Xuming Ran",
"Huajin Tang",
"Gang Pan"
] | Conference | poster | 2401.03719 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=aG6xOP9QY7 | @inproceedings{
badanidiyuru2023optimal,
title={Optimal Unbiased Randomizers for Regression with Label Differential Privacy},
author={Ashwinkumar Badanidiyuru and Badih Ghazi and Pritish Kamath and Ravi Kumar and Ethan Jacob Leeman and Pasin Manurangsi and Avinash V Varadarajan and Chiyuan Zhang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=aG6xOP9QY7}
} | We propose a new family of label randomizers for training _regression_ models under the constraint of label differential privacy (DP). In particular, we leverage the trade-offs between bias and variance to construct better label randomizers depending on a privately estimated prior distribution over the labels. We demonstrate that these randomizers achieve state-of-the-art privacy-utility trade-offs on several datasets, highlighting the importance of reducing bias when training neural networks with label DP. We also provide theoretical results shedding light on the structural properties of the optimal unbiased randomizers. | Optimal Unbiased Randomizers for Regression with Label Differential Privacy | [
"Ashwinkumar Badanidiyuru",
"Badih Ghazi",
"Pritish Kamath",
"Ravi Kumar",
"Ethan Jacob Leeman",
"Pasin Manurangsi",
"Avinash V Varadarajan",
"Chiyuan Zhang"
] | Conference | poster | 2312.05659 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=aDLmRMb0K9 | @inproceedings{
stojanovic2023spectral,
title={Spectral Entry-wise Matrix Estimation for Low-Rank Reinforcement Learning},
author={Stefan Stojanovic and Yassir Jedra and Alexandre Proutiere},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=aDLmRMb0K9}
} | We study matrix estimation problems arising in reinforcement learning with low-rank structure. In low-rank bandits, the matrix to be recovered specifies the expected arm rewards, and for low-rank Markov Decision Processes (MDPs), it characterizes the transition kernel of the MDP. In both cases, each entry of the matrix carries important information, and we seek estimation methods with low entry-wise prediction error. Importantly, these methods further need to accommodate for inherent correlations in the available data (e.g. for MDPs, the data consists of system trajectories). We investigate the performance of simple spectral-based matrix estimation approaches: we show that they efficiently recover the singular subspaces of the matrix and exhibit nearly-minimal entry-wise prediction error. These new results on low-rank matrix estimation make it possible to devise reinforcement learning algorithms that fully exploit the underlying low-rank structure. We provide two examples of such algorithms: a regret minimization algorithm for low-rank bandit problems, and a best policy identification algorithm for low-rank MDPs. Both algorithms yield state-of-the-art performance guarantees. | Spectral Entry-wise Matrix Estimation for Low-Rank Reinforcement Learning | [
"Stefan Stojanovic",
"Yassir Jedra",
"Alexandre Proutiere"
] | Conference | poster | 2310.06793 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=aCOKUvqHtD | @inproceedings{
farina2023polynomialtime,
title={Polynomial-Time Linear-Swap Regret Minimization in Imperfect-Information Sequential Games},
author={Gabriele Farina and Charilaos Pipis},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=aCOKUvqHtD}
} | No-regret learners seek to minimize the difference between the loss they cumulated through the actions they played, and the loss they would have cumulated in hindsight had they consistently modified their behavior according to some strategy transformation function. The size of the set of transformations considered by the learner determines a natural notion of rationality. As the set of transformations each learner considers grows, the strategies played by the learners recover more complex game-theoretic equilibria, including correlated
equilibria in normal-form games and extensive-form correlated equilibria in extensive-form games. At the extreme, a no-swap-regret agent is one that minimizes regret against the set of all functions from the set of strategies to itself. While it is known that the no-swap-regret condition can be attained efficiently in nonsequential (normal-form) games, understanding what is the strongest notion of rationality that can be attained efficiently in the worst case in sequential (extensive-form) games is a longstanding open problem. In this paper we provide a positive result, by showing that it is possible, in any sequential game, to retain polynomial-time (in the game tree size) iterations while achieving sublinear regret with respect to all linear transformations of the mixed strategy space, a notion called no-linear-swap regret. This notion of hindsight rationality is as strong as no-swap-regret in nonsequential games, and stronger than no-trigger-regret in sequential games—thereby proving the existence of a subset of extensive-form correlated equilibria robust to linear deviations, which we call linear-deviation correlated equilibria, that can be approached efficiently. | Polynomial-Time Linear-Swap Regret Minimization in Imperfect-Information Sequential Games | [
"Gabriele Farina",
"Charilaos Pipis"
] | Conference | poster | 2307.05448 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=a2svOXTVgO | @inproceedings{
mazumder2023on,
title={On the Convergence of {CART} under Sufficient Impurity Decrease Condition},
author={Rahul Mazumder and Haoyue Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=a2svOXTVgO}
} | The decision tree is a flexible machine-learning model that finds its success in numerous applications. It is usually fitted in a recursively greedy manner using CART. In this paper, we study the convergence rate of CART under a regression setting. First, we prove an upper bound on the prediction error of CART under a sufficient impurity decrease (SID) condition [Chi et al., 2020] -- our result is an improvement over the known result by [Chi et al., 2020] under a similar assumption. We show via examples that this error bound cannot be further improved by more than a constant or a log factor. Second, we introduce a few easy-to-check sufficient conditions of the SID condition. In particular, we show that the SID condition can be satisfied by an additive model when the component functions satisfy a ``locally reverse Poincare inequality''. We discuss a few familiar function classes in non-parametric estimation to demonstrate the usefulness of this concept. | On the Convergence of CART under Sufficient Impurity Decrease Condition | [
"Rahul Mazumder",
"Haoyue Wang"
] | Conference | poster | 2310.17114 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=a2Yg9Za6Rb | @inproceedings{
jagielski2023students,
title={Students Parrot Their Teachers: Membership Inference on Model Distillation},
author={Matthew Jagielski and Milad Nasr and Katherine Lee and Christopher A. Choquette-Choo and Nicholas Carlini and Florian Tram{\`e}r},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=a2Yg9Za6Rb}
} | Model distillation is frequently proposed as a technique to reduce the privacy leakage of machine learning. These empirical privacy defenses rely on the intuition that distilled ``student'' models protect the privacy of training data, as they only interact with this data indirectly through a ``teacher'' model. In this work, we design membership inference attacks to systematically study the privacy provided by knowledge distillation to both the teacher and student training sets. Our new attacks show that distillation alone provides only limited privacy across a number of domains. We explain the success of our attacks on distillation by showing that membership inference attacks on a private dataset can succeed even if the target model is never queried on any actual training points, but only on inputs whose predictions are highly influenced by training data. Finally, we show that our attacks are strongest when student and teacher sets are similar, or when the attacker can poison the teacher set. | Students Parrot Their Teachers: Membership Inference on Model Distillation | [
"Matthew Jagielski",
"Milad Nasr",
"Katherine Lee",
"Christopher A. Choquette-Choo",
"Nicholas Carlini",
"Florian Tramèr"
] | Conference | oral | 2303.03446 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=a147pIS2Co | @inproceedings{
hoffman2023training,
title={Training Chain-of-Thought via Latent-Variable Inference},
author={Matthew Douglas Hoffman and Du Phan and david dohan and Sholto Douglas and Tuan Anh Le and Aaron T Parisi and Pavel Sountsov and Charles Sutton and Sharad Vikram and Rif A. Saurous},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=a147pIS2Co}
} | Large language models (LLMs) solve problems more accurately and interpretably when instructed to work out the answer step by step using a "chain-of-thought" (CoT) prompt. One can also improve LLMs' performance on a specific task by supervised fine-tuning, i.e., by using gradient ascent on some tunable parameters to maximize the average log-likelihood of correct answers from a labeled training set.
Naively combining CoT with supervised tuning requires supervision not just of the correct answers, but also of detailed rationales that lead to those answers; these rationales are expensive to produce by hand. Instead, we propose a fine-tuning strategy that tries to maximize the \emph{marginal} log-likelihood of generating a correct answer using CoT prompting, approximately averaging over all possible rationales. The core challenge is sampling from the posterior over rationales conditioned on the correct answer; we address it using a simple Markov-chain Monte Carlo (MCMC) expectation-maximization (EM) algorithm inspired by the self-taught reasoner (STaR), memoized wake-sleep, Markovian score climbing, and persistent contrastive divergence. This algorithm also admits a novel control-variate technique that drives the variance of our gradient estimates to zero as the model improves. Applying our technique to GSM8K and the tasks in BIG-Bench Hard, we find that this MCMC-EM fine-tuning technique typically improves the model's accuracy on held-out examples more than STaR or prompt-tuning with or without CoT. | Training Chain-of-Thought via Latent-Variable Inference | [
"Du Phan",
"Matthew Douglas Hoffman",
"david dohan",
"Sholto Douglas",
"Tuan Anh Le",
"Aaron T Parisi",
"Pavel Sountsov",
"Charles Sutton",
"Sharad Vikram",
"Rif A. Saurous"
] | Conference | poster | 2312.02179 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=Zyzluw0hC4 | @inproceedings{
pinheiro2023d,
title={3D molecule generation by denoising voxel grids},
author={Pedro O. Pinheiro and Joshua Rackers and joseph Kleinhenz and Michael Maser and Omar Mahmood and Andrew Martin Watkins and Stephen Ra and Vishnu Sresht and Saeed Saremi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Zyzluw0hC4}
} | We propose a new score-based approach to generate 3D molecules represented as atomic densities on regular grids.
First, we train a denoising neural network that learns to map from a smooth distribution of noisy molecules to the distribution of real molecules.
Then, we follow the _neural empirical Bayes_ framework [Saremi and Hyvarinen, 2019] and generate molecules in two steps: (i) sample noisy density grids from a smooth distribution via underdamped Langevin Markov chain Monte Carlo, and (ii) recover the "clean" molecule by denoising the noisy grid with a single step.
Our method, _VoxMol_, generates molecules in a fundamentally different way than the current state of the art (ie, diffusion models applied to atom point clouds). It differs in terms of the data representation, the noise model, the network architecture and the generative modeling algorithm.
Our experiments show that VoxMol captures the distribution of drug-like molecules better than state of the art, while being faster to generate samples. | 3D molecule generation by denoising voxel grids | [
"Pedro O. Pinheiro",
"Joshua Rackers",
"joseph Kleinhenz",
"Michael Maser",
"Omar Mahmood",
"Andrew Martin Watkins",
"Stephen Ra",
"Vishnu Sresht",
"Saeed Saremi"
] | Conference | poster | 2306.07473 | [
"https://github.com/genentech/voxmol"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=ZwQJRXLjVm | @inproceedings{
qin2023rehearsal,
title={Rehearsal Learning for Avoiding Undesired Future},
author={Tian Qin and Tian-Zuo Wang and Zhi-Hua Zhou},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZwQJRXLjVm}
} | Machine learning (ML) models have been widely used to make predictions. Instead of a predictive statement about future outcomes, in many situations we want to pursue a decision: what can we do to avoid the undesired future if an ML model predicts so? In this paper, we present a rehearsal learning framework, in which decisions that can persuasively avoid the happening of undesired outcomes can be found and recommended. Based on the influence relation, we characterize the generative process of variables with structural rehearsal models, consisting of a probabilistic graphical model called rehearsal graphs and structural equations, and find actionable decisions that can alter the outcome by reasoning under a Bayesian framework. Moreover, we present a probably approximately correct bound to quantify the associated risk of a decision. Experiments validate the effectiveness of the proposed rehearsal learning framework and the informativeness of the bound. | Rehearsal Learning for Avoiding Undesired Future | [
"Tian Qin",
"Tian-Zuo Wang",
"Zhi-Hua Zhou"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=ZvDmna23r3 | @inproceedings{
hu2023thought,
title={Thought Cloning: Learning to Think while Acting by Imitating Human Thinking},
author={Shengran Hu and Jeff Clune},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZvDmna23r3}
} | Language is often considered a key aspect of human thinking, providing us with exceptional abilities to generalize, explore, plan, replan, and adapt to new situations. However, Reinforcement Learning (RL) agents are far from human-level performance in any of these abilities. We hypothesize one reason for such cognitive deficiencies is that they lack the benefits of thinking in language and that we can improve AI agents by training them to $\textit{think like humans do}$. We introduce a novel Imitation Learning framework, Thought Cloning, where the idea is to not just clone the behaviors of human demonstrators, $\textit{but also the thoughts humans have as they perform these behaviors}$. While we expect Thought Cloning to truly shine at scale on internet-sized datasets (e.g. online videos with transcripts), here we conduct experiments in a domain where the thinking and action data are synthetically generated. Results reveal that Thought Cloning learns much faster than Behavioral Cloning and its performance advantage grows the further out of distribution test tasks are, highlighting its ability to better handle novel situations. Thought Cloning also provides important benefits for AI Safety and Interpretability, and makes it easier to debug and improve AI. Because we can observe the agent’s thoughts, we can (1) more easily diagnose why things are going wrong, making it easier to fix the problem, (2) steer the agent by correcting its thinking, or (3) prevent it from doing unsafe things it plans to do. Overall, by training agents $\textit{how to think}$ as well as behave, Thought Cloning creates safer, more powerful agents. | Thought Cloning: Learning to Think while Acting by Imitating Human Thinking | [
"Shengran Hu",
"Jeff Clune"
] | Conference | spotlight | 2306.00323 | [
"https://github.com/ShengranHu/Thought-Cloning"
] | https://huggingface.co/papers/2306.00323 | 0 | 0 | 0 | 2 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=ZuaVKlWdD2 | @inproceedings{
wang2023injecting,
title={Injecting Multimodal Information into Rigid Protein Docking via Bi-level Optimization},
author={Ruijia Wang and YiWu Sun and Yujie Luo and Shaochuan Li and Cheng Yang and Xingyi Cheng and Hui Li and Chuan Shi and Le Song},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZuaVKlWdD2}
} | The structure of protein-protein complexes is critical for understanding binding dynamics, biological mechanisms, and intervention strategies. Rigid protein docking, a fundamental problem in this field, aims to predict the 3D structure of complexes from their unbound states without conformational changes. In this scenario, we have access to two types of valuable information: sequence-modal information, such as coevolutionary data obtained from multiple sequence alignments, and structure-modal information, including the 3D conformations of rigid structures. However, existing docking methods typically utilize single-modal information, resulting in suboptimal predictions. In this paper, we propose xTrimoBiDock (or BiDock for short), a novel rigid docking model that effectively integrates sequence- and structure-modal information through bi-level optimization. Specifically, a cross-modal transformer combines multimodal information to predict an inter-protein distance map. To achieve rigid docking, the roto-translation transformation is optimized to align the docked pose with the predicted distance map. In order to tackle this bi-level optimization problem, we unroll the gradient descent of the inner loop and further derive a better initialization for the roto-translation transformation based on spectral estimation. Compared to baselines, BiDock achieves a promising result of a maximum 234% relative improvement on the challenging antibody-antigen docking problem. | Injecting Multimodal Information into Rigid Protein Docking via Bi-level Optimization | [
"Ruijia Wang",
"YiWu Sun",
"Yujie Luo",
"Shaochuan Li",
"Cheng Yang",
"Xingyi Cheng",
"Hui Li",
"Chuan Shi",
"Le Song"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=Zt9RzHjSEy | @inproceedings{
andoni2023differentially,
title={Differentially Private Approximate Near Neighbor Counting in High Dimensions},
author={Alexandr Andoni and Piotr Indyk and Sepideh Mahabadi and Shyam Narayanan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Zt9RzHjSEy}
} | Range counting (e.g., counting the number of data points falling into a given query ball) under differential privacy has been studied extensively. However, the current algorithms for this problem are subject to the following dichotomy. One class of algorithms suffers from an additive error that is a fixed polynomial in the number of points. Another class of algorithms allows for polylogarithmic additive error, but the error grows exponentially in the dimension. To achieve the latter, the problem is relaxed to allow a “fuzzy” definition of the range boundary, e.g., a count of the points in a ball of radius $r$ might also include points in a ball of radius $cr$ for some $c>1$. In this paper we present an efficient algorithm that offers a sweet spot between these two classes. The algorithm has an additive error that is an arbitrary small power of the data set size, depending on how fuzzy the range boundary is, as well as a small ($1+o(1)$) multiplicative error. Crucially, the amount of noise added has no dependence on the dimension. Our algorithm introduces a variant of Locality-Sensitive Hashing, utilizing it in a novel manner. | Differentially Private Approximate Near Neighbor Counting in High Dimensions | [
"Alexandr Andoni",
"Piotr Indyk",
"Sepideh Mahabadi",
"Shyam Narayanan"
] | Conference | spotlight | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=ZrG8kTbt70 | @inproceedings{
tan2023walklm,
title={Walk{LM}: A Uniform Language Model Fine-tuning Framework for Attributed Graph Embedding},
author={Yanchao Tan and Zihao Zhou and Hang Lv and Weiming Liu and Carl Yang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZrG8kTbt70}
} | Graphs are widely used to model interconnected entities and improve downstream predictions in various real-world applications. However, real-world graphs nowadays are often associated with complex attributes on multiple types of nodes and even links that are hard to model uniformly, while the widely used graph neural networks (GNNs) often require sufficient training toward specific downstream predictions to achieve strong performance. In this work, we take a fundamentally different approach than GNNs, to simultaneously achieve deep joint modeling of complex attributes and flexible structures of real-world graphs and obtain unsupervised generic graph representations that are not limited to specific downstream predictions. Our framework, built on a natural integration of language models (LMs) and random walks (RWs), is straightforward, powerful and data-efficient. Specifically, we first perform attributed RWs on the graph and design an automated program to compose roughly meaningful textual sequences directly from the attributed RWs; then we fine-tune an LM using the RW-based textual sequences and extract embedding vectors from the LM, which encapsulates both attribute semantics and graph structures. In our experiments, we evaluate the learned node embeddings towards different downstream prediction tasks on multiple real-world attributed graph datasets and observe significant improvements over a comprehensive set of state-of-the-art unsupervised node embedding methods. We believe this work opens a door for more sophisticated technical designs and empirical evaluations toward the leverage of LMs for the modeling of real-world graphs. | WalkLM: A Uniform Language Model Fine-tuning Framework for Attributed Graph Embedding | [
"Yanchao Tan",
"Zihao Zhou",
"Hang Lv",
"Weiming Liu",
"Carl Yang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=ZqSx5vXOgC | @inproceedings{
alman2023bypass,
title={Bypass Exponential Time Preprocessing: Fast Neural Network Training via Weight-Data Correlation Preprocessing},
author={Josh Alman and Jiehao Liang and Zhao Song and Ruizhe Zhang and Danyang Zhuo},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZqSx5vXOgC}
} | Over the last decade, deep neural networks have transformed our society, and they are already widely applied in various machine learning applications. State-of-the-art deep neural networks are becoming larger in size every year to deliver increasing model accuracy, and as a result, model training consumes substantial computing resources and will only consume more in the future.
Using current training methods, in each iteration, to process a data point $x \in \mathbb{R}^d$ in a layer, we need to spend $\Theta(md)$ time to evaluate all the $m$ neurons in the layer. This means processing the entire layer takes $\Theta(nmd)$ time for $n$ data points. Recent work [Song, Yang and Zhang, NeurIPS 2021] reduces this time per iteration to $o(nmd)$, but requires exponential time to preprocess either the data or the neural network weights, making it unlikely to have practical usage.
In this work, we present a new preprocessing method that simply stores the weight-data correlation in a tree data structure in order to quickly and dynamically detect which neurons fire at each iteration. Our method requires only $O(nmd)$ time in preprocessing and still achieves $o(nmd)$ time per iteration. We complement our new algorithm with a lower bound, proving that assuming a popular conjecture from complexity theory, one could not substantially speed up our algorithm for dynamic detection of firing neurons. | Bypass Exponential Time Preprocessing: Fast Neural Network Training via Weight-Data Correlation Preprocessing | [
"Josh Alman",
"Jiehao Liang",
"Zhao Song",
"Ruizhe Zhang",
"Danyang Zhuo"
] | Conference | poster | 2211.14227 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=Znpz1sv4IP | @inproceedings{
jiang2023se,
title={{SE}(3) Diffusion Model-based Point Cloud Registration for Robust 6D Object Pose Estimation},
author={Haobo Jiang and Mathieu Salzmann and Zheng Dang and Jin Xie and Jian Yang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Znpz1sv4IP}
} | In this paper, we introduce an SE(3) diffusion model-based point cloud registration framework for 6D object pose estimation in real-world scenarios. Our approach formulates the 3D registration task as a denoising diffusion process, which progressively refines the pose of the source point cloud to obtain a precise alignment with the model point cloud. Training our framework involves two operations: An SE(3) diffusion process and an SE(3) reverse process. The SE(3) diffusion process gradually perturbs the optimal rigid transformation of a pair of point clouds by continuously injecting noise (perturbation transformation). By contrast, the SE(3) reverse process focuses on learning a denoising network that refines the noisy transformation step-by-step, bringing it closer to the optimal transformation for accurate pose estimation. Unlike standard diffusion models used in linear Euclidean spaces, our diffusion model operates on the SE(3) manifold. This requires exploiting the linear Lie algebra $\mathfrak{se}(3)$ associated with SE(3) to constrain the transformation transitions during the diffusion and reverse processes. Additionally, to effectively train our denoising network, we derive a registration-specific variational lower bound as the optimization objective for model learning. Furthermore, we show that our denoising network can be constructed with a surrogate registration model, making our approach applicable to different deep registration networks. Extensive experiments demonstrate that our diffusion registration framework presents outstanding pose estimation performance on the real-world TUD-L, LINEMOD, and Occluded-LINEMOD datasets. | SE(3) Diffusion Model-based Point Cloud Registration for Robust 6D Object Pose Estimation | [
"Haobo Jiang",
"Mathieu Salzmann",
"Zheng Dang",
"Jin Xie",
"Jian Yang"
] | Conference | poster | 2310.17359 | [
""
] | https://huggingface.co/papers/2310.17359 | 0 | 1 | 0 | 5 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=ZmeAoWQqe0 | @inproceedings{
li2023time,
title={Time Series as Images: Vision Transformer for Irregularly Sampled Time Series},
author={Zekun Li and Shiyang Li and Xifeng Yan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZmeAoWQqe0}
} | Irregularly sampled time series are increasingly prevalent, particularly in medical domains. While various specialized methods have been developed to handle these irregularities, effectively modeling their complex dynamics and pronounced sparsity remains a challenge.
This paper introduces a novel perspective by converting irregularly sampled time series into line graph images, then utilizing powerful pre-trained vision transformers for time series classification in the same way as image classification. This method not only largely simplifies specialized algorithm designs but also presents the potential to serve as a universal framework for time series modeling. Remarkably, despite its simplicity, our approach outperforms state-of-the-art specialized algorithms on several popular healthcare and human activity datasets. Especially in the rigorous leave-sensors-out setting where a portion of variables is omitted during testing, our method exhibits strong robustness against varying degrees of missing observations, achieving an impressive improvement of 42.8% in absolute F1 score points over leading specialized baselines even with half the variables masked. Code and data are available at https://github.com/Leezekun/ViTST. | Time Series as Images: Vision Transformer for Irregularly Sampled Time Series | [
"Zekun Li",
"Shiyang Li",
"Xifeng Yan"
] | Conference | poster | 2303.12799 | [
"https://github.com/leezekun/vitst"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=ZmSg4f16uo | @inproceedings{
flennerhag2023optimistic,
title={Optimistic Meta-Gradients},
author={Sebastian Flennerhag and Tom Zahavy and Brendan O'Donoghue and Hado van Hasselt and Andr{\'a}s Gy{\"o}rgy and Satinder Singh},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZmSg4f16uo}
} | We study the connection between gradient-based meta-learning and convex optimisation. We observe that gradient descent with momentum is a special case of meta-gradients, and building on recent results in optimisation, we prove convergence rates for meta-learning in the single-task setting. While a meta-learned update rule can yield faster convergence up to a constant factor, it is not sufficient for acceleration. Instead, some form of optimism is required. We show that optimism in meta-learning can be captured through the recently proposed Bootstrapped Meta-Gradient (Flennerhag et al., 2022) method, providing deeper insight into its underlying mechanics. | Optimistic Meta-Gradients | [
"Sebastian Flennerhag",
"Tom Zahavy",
"Brendan O'Donoghue",
"Hado van Hasselt",
"András György",
"Satinder Singh"
] | Conference | poster | 2301.03236 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=Zi1KKzh5Aj | @inproceedings{
zeng2023collapsed,
title={Collapsed Inference for Bayesian Deep Learning},
author={Zhe Zeng and Guy Van den Broeck},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Zi1KKzh5Aj}
} | Bayesian neural networks (BNNs) provide a formalism to quantify and calibrate uncertainty in deep learning. Current inference approaches for BNNs often resort to few-sample estimation for scalability, which can harm predictive performance, while its alternatives tend to be computationally prohibitively expensive. We tackle this challenge by revealing a previously unseen connection between inference on BNNs and volume computation problems. With this observation, we introduce a novel collapsed inference scheme that performs Bayesian model averaging using collapsed samples. It improves over a Monte-Carlo sample by limiting sampling to a subset of the network weights while pairing it with some closed-form conditional distribution over the rest. A collapsed sample represents uncountably many models drawn from the approximate posterior and thus yields higher sample efficiency. Further, we show that the marginalization of a collapsed sample can be solved analytically and efficiently despite the non-linearity of neural networks by leveraging existing volume computation solvers. Our proposed use of collapsed samples achieves a balance between scalability and accuracy. On various regression and classification tasks, our collapsed Bayesian deep learning approach demonstrates significant improvements over existing methods and sets a new state of the art in terms of uncertainty estimation as well as predictive performance. | Collapsed Inference for Bayesian Deep Learning | [
"Zhe Zeng",
"Guy Van den Broeck"
] | Conference | poster | 2306.09686 | [
"https://github.com/ucla-starai/ciber"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=ZgVJvaAS2h | @inproceedings{
zhang2023a,
title={A Unified Conditional Framework for Diffusion-based Image Restoration},
author={Yi Zhang and Xiaoyu Shi and Dasong Li and Xiaogang Wang and Jian Wang and Hongsheng Li},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZgVJvaAS2h}
} | Diffusion Probabilistic Models (DPMs) have recently shown remarkable performance in image generation tasks, which are capable of generating highly realistic images. When adopting DPMs for image restoration tasks, the crucial aspect lies in how to integrate the conditional information to guide the DPMs to generate accurate and natural output, which has been largely overlooked in existing works. In this paper, we present a unified conditional framework based on diffusion models for image restoration. We leverage a lightweight UNet to predict initial guidance and the diffusion model to learn the residual of the guidance. By carefully designing the basic module and integration module for the diffusion model block, we integrate the guidance and other auxiliary conditional information into every block of the diffusion model to achieve spatially-adaptive generation conditioning. To handle high-resolution images, we propose a simple yet effective inter-step patch-splitting strategy to produce arbitrary-resolution images without grid artifacts. We evaluate our conditional framework on three challenging tasks: extreme low-light denoising, deblurring, and JPEG restoration, demonstrating its significant improvements in perceptual quality and the generalization to restoration tasks. The code will be released at https://zhangyi-3.github.io/project/UCDIR/. | A Unified Conditional Framework for Diffusion-based Image Restoration | [
"Yi Zhang",
"Xiaoyu Shi",
"Dasong Li",
"Xiaogang Wang",
"Jian Wang",
"Hongsheng Li"
] | Conference | poster | 2305.20049 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=ZfFR4d5gUM | @inproceedings{
marion2023leveraging,
title={Leveraging the two-timescale regime to demonstrate convergence of neural networks},
author={Pierre Marion and Rapha{\"e}l Berthier},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZfFR4d5gUM}
} | We study the training dynamics of shallow neural networks, in a two-timescale regime in which the stepsizes for the inner layer are much smaller than those for the outer layer. In this regime, we prove convergence of the gradient flow to a global optimum of the non-convex optimization problem in a simple univariate setting. The number of neurons need not be asymptotically large for our result to hold, distinguishing our result from popular recent approaches such as the neural tangent kernel or mean-field regimes. Experimental illustration is provided, showing that the stochastic gradient descent behaves according to our description of the gradient flow and thus converges to a global optimum in the two-timescale regime, but can fail outside of this regime. | Leveraging the two-timescale regime to demonstrate convergence of neural networks | [
"Pierre Marion",
"Raphaël Berthier"
] | Conference | poster | [
"https://github.com/PierreMarion23/two-timescale-nn"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=ZejTutd7VY | @inproceedings{
xue2023trojllm,
title={Troj{LLM}: A Black-box Trojan Prompt Attack on Large Language Models},
author={Jiaqi Xue and Mengxin Zheng and Ting Hua and Yilin Shen and Yepeng Liu and Ladislau B{\"o}l{\"o}ni and Qian Lou},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZejTutd7VY}
} | Large Language Models (LLMs) are progressively being utilized as machine learning services and interface tools for various applications. However, the security implications of LLMs, particularly in relation to adversarial and Trojan attacks, remain insufficiently examined. In this paper, we propose TrojLLM, an automatic and black-box framework to effectively generate universal and stealthy triggers. When these triggers are incorporated into the input data, the LLMs' outputs can be maliciously manipulated. Moreover, the framework also supports embedding Trojans within discrete prompts, enhancing the overall effectiveness and precision of the triggers' attacks. Specifically, we propose a trigger discovery algorithm for generating universal triggers for various inputs by querying victim LLM-based APIs using few-shot data samples. Furthermore, we introduce a novel progressive Trojan poisoning algorithm designed to generate poisoned prompts that retain efficacy and transferability across a diverse range of models. Our experiments and results demonstrate TrojLLM's capacity to effectively insert Trojans into text prompts in real-world black-box LLM APIs including GPT-3.5 and GPT-4, while maintaining exceptional performance on clean test sets. Our work sheds light on the potential security risks in current models and offers a potential defensive approach. The source code of TrojLLM is available at https://github.com/UCF-ML-Research/TrojLLM. | TrojLLM: A Black-box Trojan Prompt Attack on Large Language Models | [
"Jiaqi Xue",
"Mengxin Zheng",
"Ting Hua",
"Yilin Shen",
"Yepeng Liu",
"Ladislau Bölöni",
"Qian Lou"
] | Conference | poster | 2306.06815 | [
"https://github.com/ucf-ml-research/trojllm"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=ZeRiLBvIps | @inproceedings{
wang2023local,
title={Local Convergence of Gradient Methods for Min-Max Games: Partial Curvature Generically Suffices},
author={Guillaume Wang and L{\'e}na{\"\i}c Chizat},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZeRiLBvIps}
} | We study the convergence to local Nash equilibria of gradient methods for two-player zero-sum differentiable games.
It is well-known that, in the continuous-time setting, such dynamics converge locally when $S \succ 0$ and may diverge when $S=0$, where $S\succeq 0$ is the symmetric part of the Jacobian at equilibrium that accounts for the "potential" component of the game. We show that these dynamics also converge as soon as $S$ is nonzero (*partial curvature*) and the eigenvectors of the antisymmetric part $A$ are in general position with respect to the kernel of $S$.
We then study the convergence rate when $S \ll A$ and prove that it typically depends on the *average* of the eigenvalues of $S$, instead of the minimum as an analogy with minimization problems would suggest.
To illustrate our results, we consider the problem of computing mixed Nash equilibria of continuous games. We show that, thanks to partial curvature, conic particle methods -- which optimize over both weights and supports of the mixed strategies -- generically converge faster than fixed-support methods.
For min-max games, it is thus beneficial to add degrees of freedom "with curvature": this can be interpreted as yet another benefit of over-parameterization. | Local Convergence of Gradient Methods for Min-Max Games: Partial Curvature Generically Suffices | [
"Guillaume Wang",
"Lénaïc Chizat"
] | Conference | poster | 2305.17275 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=ZdxGmJGKOo | @inproceedings{
yang2023simfbo,
title={Sim{FBO}: Towards Simple, Flexible and Communication-efficient Federated Bilevel Learning},
author={Yifan Yang and Peiyao Xiao and Kaiyi Ji},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZdxGmJGKOo}
} | Federated bilevel optimization (FBO) has shown great potential recently in machine learning and edge computing due to the emerging nested optimization structure in meta-learning, fine-tuning, hyperparameter tuning, etc. However, existing FBO algorithms often involve complicated computations and require multiple sub-loops per iteration, each of which contains a number of communication rounds. In this paper, we propose a simple and flexible FBO framework named SimFBO, which is easy to implement without sub-loops, and includes a generalized server-side aggregation and update for improving communication efficiency. We further propose System-level heterogeneity robust FBO (ShroFBO) as a variant of SimFBO with stronger resilience to heterogeneous local computation. We show that SimFBO and ShroFBO provably achieve a linear convergence speedup with partial client participation and client sampling without replacement, as well as improved sample and communication complexities. Experiments demonstrate the effectiveness of the proposed methods over existing FBO algorithms. | SimFBO: Towards Simple, Flexible and Communication-efficient Federated Bilevel Learning | [
"Yifan Yang",
"Peiyao Xiao",
"Kaiyi Ji"
] | Conference | spotlight | 2305.19442 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=ZcuFDaMTYw | @inproceedings{
vuursteen2023optimal,
title={Optimal testing using combined test statistics across independent studies},
author={Lasse Vuursteen and Botond Szabo and Aad van der Vaart and Harry van Zanten},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZcuFDaMTYw}
} | Combining test statistics from independent trials or experiments is a popular method of meta-analysis. However, there is very limited theoretical understanding of the power of the combined test, especially in high-dimensional models considering composite hypotheses tests. We derive a mathematical framework to study standard meta-analysis testing approaches in the context of the many normal means model, which serves as the platform to investigate more complex models.
We introduce a natural and mild restriction on the meta-level combination functions of the local trials. This allows us to mathematically quantify the cost of compressing $m$ trials into real-valued test statistics and combining these. We then derive minimax lower and matching upper bounds for the separation rates of standard combination methods for e.g. p-values and e-values, quantifying the loss relative to using the full, pooled data. We observe an elbow effect, revealing that in certain cases combining the locally optimal tests in each trial results in a sub-optimal meta-analysis method and develop approaches to achieve the global optima. We also explore the possible gains of allowing limited coordination between the trial designs. Our results connect meta-analysis with bandwidth constraint distributed inference and build on recent information theoretic developments in the latter field. | Optimal testing using combined test statistics across independent studies | [
"Lasse Vuursteen",
"Botond Szabo",
"Aad van der Vaart",
"Harry van Zanten"
] | Conference | poster | 2310.19541 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
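For readers unfamiliar with the meta-level combination functions discussed in "Optimal testing using combined test statistics across independent studies" above, a classical example is Fisher's method for pooling p-values from $m$ independent trials. The sketch below is standard textbook material, not code from the paper.

```python
import numpy as np
from scipy import stats

def fisher_combine(p_values):
    """Fisher's method: under the global null, -2 * sum(log p_i) follows a
    chi-squared distribution with 2m degrees of freedom."""
    p_values = np.asarray(p_values, dtype=float)
    statistic = -2.0 * np.sum(np.log(p_values))
    combined_p = stats.chi2.sf(statistic, df=2 * len(p_values))
    return statistic, combined_p

# Example: three independent trials with moderately small p-values.
stat, p = fisher_combine([0.04, 0.10, 0.07])
print(f"chi2 statistic = {stat:.2f}, combined p-value = {p:.4f}")
```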
null | https://openreview.net/forum?id=ZcJa1R6j3v | @inproceedings{
zhang2023large,
title={Large Language Models Are Semi-Parametric Reinforcement Learning Agents},
author={Danyang Zhang and Lu Chen and Situo Zhang and Hongshen Xu and Zihan Zhao and Kai Yu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZcJa1R6j3v}
} | Inspired by the insights in cognitive science with respect to human memory and reasoning mechanism, a novel evolvable LLM-based (Large Language Model) agent framework is proposed as Rememberer. By equipping the LLM with a long-term experience memory, Rememberer is capable of exploiting the experiences from the past episodes even for different task goals, which excels an LLM-based agent with fixed exemplars or equipped with a transient working memory. We further introduce **R**einforcement **L**earning with **E**xperience **M**emory (**RLEM**) to update the memory. Thus, the whole system can learn from the experiences of both success and failure, and evolve its capability without fine-tuning the parameters of the LLM. In this way, the proposed Rememberer constitutes a semi-parametric RL agent. Extensive experiments are conducted on two RL task sets to evaluate the proposed framework. The average results with different initialization and training sets exceed the prior SOTA by 4% and 2% for the success rate on two task sets and demonstrate the superiority and robustness of Rememberer. | Large Language Models Are Semi-Parametric Reinforcement Learning Agents | [
"Danyang Zhang",
"Lu Chen",
"Situo Zhang",
"Hongshen Xu",
"Zihan Zhao",
"Kai Yu"
] | Conference | poster | 2306.07929 | [
"https://github.com/opendfm/rememberer"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=ZZgfS1DbmO | @inproceedings{
luo2023continuous,
title={Continuous Parametric Optical Flow},
author={Jianqin Luo and Zhexiong Wan and yuxin mao and Bo Li and Yuchao Dai},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZZgfS1DbmO}
} | In this paper, we present continuous parametric optical flow, a parametric representation of dense and continuous motion over arbitrary time interval. In contrast to existing discrete-time representations (i.e., flow in between consecutive frames), this new representation transforms the frame-to-frame pixel correspondences to dense continuous flow. In particular, we present a temporal-parametric model that employs B-splines to fit point trajectories using a limited number of frames. To further improve the stability and robustness of the trajectories, we also add an encoder with a neural ordinary differential equation (NODE) to represent features associated with specific times. We also contribute a synthetic dataset and introduce two evaluation perspectives to measure the accuracy and robustness of continuous flow estimation. Benefiting from the combination of explicit parametric modeling and implicit feature optimization, our model focuses on motion continuity and outperforms the flow-based and point-tracking approaches for fitting long-term and variable sequences. | Continuous Parametric Optical Flow | [
"Jianqin Luo",
"Zhexiong Wan",
"yuxin mao",
"Bo Li",
"Yuchao Dai"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=ZZWg9jJQ1j | @inproceedings{
ha2023generalizable,
title={Generalizable Lightweight Proxy for Robust {NAS} against Diverse Perturbations},
author={Hyeonjeong Ha and Minseon Kim and Sung Ju Hwang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZZWg9jJQ1j}
} | Recent neural architecture search (NAS) frameworks have been successful in finding optimal architectures for given conditions (e.g., performance or latency). However, they search for optimal architectures in terms of their performance on clean images only, while robustness against various types of perturbations or corruptions is crucial in practice. Although there exist several robust NAS frameworks that tackle this issue by integrating adversarial training into one-shot NAS, however, they are limited in that they only consider robustness against adversarial attacks and require significant computational resources to discover optimal architectures for a single task, which makes them impractical in real-world scenarios. To address these challenges, we propose a novel lightweight robust zero-cost proxy that considers the consistency across features, parameters, and gradients of both clean and perturbed images at the initialization state. Our approach facilitates an efficient and rapid search for neural architectures capable of learning generalizable features that exhibit robustness across diverse perturbations. The experimental results demonstrate that our proxy can rapidly and efficiently search for neural architectures that are consistently robust against various perturbations on multiple benchmark datasets and diverse search spaces, largely outperforming existing clean zero-shot NAS and robust NAS with reduced search cost. | Generalizable Lightweight Proxy for Robust NAS against Diverse Perturbations | [
"Hyeonjeong Ha",
"Minseon Kim",
"Sung Ju Hwang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=ZZS9WEWYbD | @inproceedings{
abel2023a,
title={A Definition of Continual Reinforcement Learning},
author={David Abel and Andre Barreto and Benjamin Van Roy and Doina Precup and Hado van Hasselt and Satinder Singh},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZZS9WEWYbD}
} | In a standard view of the reinforcement learning problem, an agent’s goal is to efficiently identify a policy that maximizes long-term reward. However, this perspective is based on a restricted view of learning as finding a solution, rather than treating learning as endless adaptation. In contrast, continual reinforcement learning refers to the setting in which the best agents never stop learning. Despite the importance of continual reinforcement learning, the community lacks a simple definition of the problem that highlights its commitments and makes its primary concepts precise and clear. To this end, this paper is dedicated to carefully defining the continual reinforcement learning problem. We formalize the notion of agents that “never stop learning” through a new mathematical language for analyzing and cataloging agents. Using this new language, we define a continual learning agent as one that can be understood as carrying out an implicit search process indefinitely, and continual reinforcement learning as the setting in which the best agents are all continual learning agents. We provide two motivating examples, illustrating that traditional views of multi-task reinforcement learning and continual supervised learning are special cases of our definition. Collectively, these definitions and perspectives formalize many intuitive concepts at the heart of learning, and open new research pathways surrounding continual learning agents. | A Definition of Continual Reinforcement Learning | [
"David Abel",
"Andre Barreto",
"Benjamin Van Roy",
"Doina Precup",
"Hado van Hasselt",
"Satinder Singh"
] | Conference | poster | 2307.11046 | [
""
] | https://huggingface.co/papers/2307.11046 | 0 | 0 | 0 | 6 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=ZXbgVm3PSt | @inproceedings{
bhatia2023tart,
title={{TART}: A plug-and-play Transformer module for task-agnostic reasoning},
author={Kush Bhatia and Avanika Narayan and Christopher De Sa and Christopher Re},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZXbgVm3PSt}
} | Large language models (LLMs) exhibit in-context learning abilities which enable the same model to perform several tasks without any task-specific training. In contrast, traditional adaptation approaches, such as fine-tuning, modify the underlying models for each specific task. In-context learning, however, consistently underperforms task-specific tuning approaches even when presented with the same examples. While most existing approaches (e.g., prompt engineering) focus on the LLM's learned representations to patch this performance gap, our experiments actually reveal that LLM representations contain sufficient information to make good predictions. As such, we focus on the LLM's reasoning abilities and demonstrate that this performance gap exists due to their inability to perform simple probabilistic reasoning tasks. This raises an intriguing question: Are LLMs actually capable of learning how to reason in a task-agnostic manner? We answer this in the affirmative and, as a proof of concept, propose TART which generically improves an LLM's reasoning abilities using a synthetically trained reasoning module. TART trains this Transformer-based reasoning module in a task-agnostic manner using only synthetic logistic regression tasks and composes it with an arbitrary real-world pre-trained model without any additional training. With a single inference module, TART improves performance across different model families (GPT-Neo, Pythia, Bloom), model sizes (100M - 6B), tasks (14 NLP classification tasks), and even across different modalities (audio and vision). On the RAFT Benchmark, TART improves GPT-Neo (125M)'s performance such that it outperforms Bloom (176B), and is within $4$% of GPT-3. | TART: A plug-and-play Transformer module for task-agnostic reasoning | [
"Kush Bhatia",
"Avanika Narayan",
"Christopher De Sa",
"Christopher Re"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=ZViPzk1sUI | @inproceedings{
jambulapati2023structured,
title={Structured Semidefinite Programming for Recovering Structured Preconditioners},
author={Arun Jambulapati and Jerry Li and Christopher Musco and Kirankumar Shiragur and Aaron Sidford and Kevin Tian},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZViPzk1sUI}
} | We develop a general framework for finding approximately-optimal preconditioners for solving linear systems. Leveraging this framework we obtain improved runtimes for fundamental preconditioning and linear system solving problems including:
Diagonal preconditioning. We give an algorithm which, given positive definite $\mathbf{K} \in \mathbb{R}^{d \times d}$ with $\mathrm{nnz}(\mathbf{K})$ nonzero entries, computes an $\epsilon$-optimal diagonal preconditioner in time $\widetilde{O}(\mathrm{nnz}(\mathbf{K}) \cdot \mathrm{poly}(\kappa^\star,\epsilon^{-1}))$, where $\kappa^\star$ is the optimal condition number of the rescaled matrix.
Structured linear systems. We give an algorithm which, given $\mathbf{M} \in \mathbb{R}^{d \times d}$ that is either the pseudoinverse of a graph Laplacian matrix or a constant spectral approximation of one, solves linear systems in $\mathbf{M}$ in $\widetilde{O}(d^2)$ time.
Our diagonal preconditioning results improve state-of-the-art runtimes of $\Omega(d^{3.5})$ attained by general-purpose semidefinite programming, and our solvers improve state-of-the-art runtimes of $\Omega(d^{\omega})$ where $\omega > 2.3$ is the current matrix multiplication constant. We attain our results via new algorithms for a class of semidefinite programs (SDPs) we call matrix-dictionary approximation SDPs, which we leverage to solve an associated problem we call matrix-dictionary recovery. | Structured Semidefinite Programming for Recovering Structured Preconditioners | [
"Arun Jambulapati",
"Jerry Li",
"Christopher Musco",
"Kirankumar Shiragur",
"Aaron Sidford",
"Kevin Tian"
] | Conference | poster | 2310.18265 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=ZVRG3toCTT | @inproceedings{
yuksekgonul2023beyond,
title={Beyond Confidence: Reliable Models Should Also Consider Atypicality},
author={Mert Yuksekgonul and Linjun Zhang and James Zou and Carlos Guestrin},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZVRG3toCTT}
} | While most machine learning models can provide confidence in their predictions, confidence is insufficient to understand a prediction's reliability. For instance, the model may have a low confidence prediction if the input is not well-represented in the training dataset or if the input is inherently ambiguous. In this work, we investigate the relationship between how atypical~(rare) a sample or a class is and the reliability of a model's predictions. We first demonstrate that atypicality is strongly related to miscalibration and accuracy. In particular, we empirically show that predictions for atypical inputs or atypical classes are more overconfident and have lower accuracy. Using these insights, we show incorporating atypicality improves uncertainty quantification and model performance for discriminative neural networks and large language models. In a case study, we show that using atypicality improves the performance of a skin lesion classifier across different skin tone groups without having access to the group attributes. Overall, we propose that models should use not only confidence but also atypicality to improve uncertainty quantification and performance. Our results demonstrate that simple post-hoc atypicality estimators can provide significant value. | Beyond Confidence: Reliable Models Should Also Consider Atypicality | [
"Mert Yuksekgonul",
"Linjun Zhang",
"James Zou",
"Carlos Guestrin"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
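One simple way to operationalize the atypicality notion discussed in "Beyond Confidence: Reliable Models Should Also Consider Atypicality" above is the negative log-density of an input's features under per-class Gaussian fits, so rare or far-from-training inputs receive high scores. This estimator choice is ours for illustration and is not necessarily the paper's exact recipe.

```python
import numpy as np

def fit_class_gaussians(features, labels):
    """Fit a mean and diagonal variance per class on training embeddings."""
    params = {}
    for c in np.unique(labels):
        f = features[labels == c]
        params[c] = (f.mean(axis=0), f.var(axis=0) + 1e-6)
    return params

def atypicality(x, params):
    """Negative log-density under the closest class Gaussian (higher = more atypical)."""
    scores = []
    for mean, var in params.values():
        nll = 0.5 * np.sum((x - mean) ** 2 / var + np.log(2 * np.pi * var))
        scores.append(nll)
    return min(scores)

rng = np.random.default_rng(0)
train_f = rng.normal(size=(500, 8))
train_y = rng.integers(0, 3, size=500)
params = fit_class_gaussians(train_f, train_y)
print(atypicality(rng.normal(size=8), params))          # typical input: low score
print(atypicality(rng.normal(size=8) + 10.0, params))   # atypical input: high score
```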
null | https://openreview.net/forum?id=ZULq9QV8rH | @inproceedings{
mialon2023selfsupervised,
title={Self-Supervised Learning with Lie Symmetries for Partial Differential Equations},
author={Gr{\'e}goire Mialon and Quentin Garrido and Hannah Lawrence and Danyal Rehman and Yann LeCun and Bobak Kiani},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZULq9QV8rH}
} | Machine learning for differential equations paves the way for computationally efficient alternatives to numerical solvers, with potentially broad impacts in science and engineering. Though current algorithms typically require simulated training data tailored to a given setting, one may instead wish to learn useful information from heterogeneous sources, or from real dynamical systems observations that are messy or incomplete. In this work, we learn general-purpose representations of PDEs from heterogeneous data by implementing joint embedding methods for self-supervised learning (SSL), a framework for unsupervised representation learning that has had notable success in computer vision. Our representation outperforms baseline approaches to invariant tasks, such as regressing the coefficients of a PDE, while also improving the time-stepping performance of neural solvers. We hope that our proposed methodology will prove useful in the eventual development of general-purpose foundation models for PDEs. | Self-Supervised Learning with Lie Symmetries for Partial Differential Equations | [
"Grégoire Mialon",
"Quentin Garrido",
"Hannah Lawrence",
"Danyal Rehman",
"Yann LeCun",
"Bobak Kiani"
] | Conference | poster | 2307.05432 | [
"https://github.com/facebookresearch/sslforpdes"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=ZRBGwpeewz | @inproceedings{
jambulapati2023revisiting,
title={Revisiting Area Convexity: Faster Box-Simplex Games and Spectrahedral Generalizations},
author={Arun Jambulapati and Kevin Tian},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZRBGwpeewz}
} | We investigate area convexity [Sherman17], a mysterious tool introduced to tackle optimization problems under the challenging $\ell_\infty$ geometry. We develop a deeper understanding of its relationship with conventional analyses of extragradient methods [Nemirovski04, Nesterov07]. We also give improved solvers for the subproblems required by variants of the [Sherman17] algorithm, designed through the lens of relative smoothness [BBT17, LFN18].
Leveraging these new tools, we give a state-of-the-art first-order algorithm for solving box-simplex games (a primal-dual formulation of $\ell_\infty$ regression) in a $d \times n$ matrix with bounded rows, using $O(\log d \cdot \epsilon^{-1})$ matrix-vector queries. As a consequence, we obtain improved complexities for approximate maximum flow, optimal transport, min-mean-cycle, and other basic combinatorial optimization problems. We also develop a near-linear time algorithm for a matrix generalization of box-simplex games, capturing a family of problems closely related to semidefinite programs recently used as subroutines in robust statistics and numerical linear algebra. | Revisiting Area Convexity: Faster Box-Simplex Games and Spectrahedral Generalizations | [
"Arun Jambulapati",
"Kevin Tian"
] | Conference | poster | 2303.15627 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=ZQzm0Z47jz | @inproceedings{
lee2023rethinking,
title={Rethinking the Role of Token Retrieval in Multi-Vector Retrieval},
author={Jinhyuk Lee and Zhuyun Dai and Sai Meher Karthik Duddu and Tao Lei and Iftekhar Naim and Ming-Wei Chang and Vincent Y Zhao},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZQzm0Z47jz}
} | Multi-vector retrieval models such as ColBERT [Khattab et al., 2020] allow token-level interactions between queries and documents, and hence achieve state of the art on many information retrieval benchmarks. However, their non-linear scoring function cannot be scaled to millions of documents, necessitating a three-stage process for inference: retrieving initial candidates via token retrieval, accessing all token vectors, and scoring the initial candidate documents. The non-linear scoring function is applied over all token vectors of each candidate document, making the inference process complicated and slow. In this paper, we aim to simplify the multi-vector retrieval by rethinking the role of token retrieval. We present XTR, ConteXtualized Token Retriever, which introduces a simple, yet novel, objective function that encourages the model to retrieve the most important document tokens first. The improvement to token retrieval allows XTR to rank candidates only using the retrieved tokens rather than all tokens in the document, and enables a newly designed scoring stage that is two-to-three orders of magnitude cheaper than that of ColBERT. On the popular BEIR benchmark, XTR advances the state-of-the-art by 2.8 nDCG@10 without any distillation. Detailed analysis confirms our decision to revisit the token retrieval stage, as XTR demonstrates much better recall of the token retrieval stage compared to ColBERT. | Rethinking the Role of Token Retrieval in Multi-Vector Retrieval | [
"Jinhyuk Lee",
"Zhuyun Dai",
"Sai Meher Karthik Duddu",
"Tao Lei",
"Iftekhar Naim",
"Ming-Wei Chang",
"Vincent Y Zhao"
] | Conference | poster | 2304.01982 | [
"https://github.com/google-deepmind/xtr"
] | https://huggingface.co/papers/2304.01982 | 1 | 0 | 0 | 7 | 1 | [
"google/xtr-base-multilingual",
"google/xtr-base-en"
] | [] | [] |
null | https://openreview.net/forum?id=ZQMlfNijY5 | @inproceedings{
xu2023normalizing,
title={Normalizing flow neural networks by {JKO} scheme},
author={Chen Xu and Xiuyuan Cheng and Yao Xie},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZQMlfNijY5}
} | Normalizing flow is a class of deep generative models for efficient sampling and likelihood estimation, which achieves attractive performance, particularly in high dimensions. The flow is often implemented using a sequence of invertible residual blocks. Existing works adopt special network architectures and regularization of flow trajectories. In this paper, we develop a neural ODE flow network called JKO-iFlow, inspired by the Jordan-Kinderleherer-Otto (JKO) scheme, which unfolds the discrete-time dynamic of the Wasserstein gradient flow. The proposed method stacks residual blocks one after another, allowing efficient block-wise training of the residual blocks, avoiding sampling SDE trajectories and score matching or variational learning, thus reducing the memory load and difficulty in end-to-end training. We also develop adaptive time reparameterization of the flow network with a progressive refinement of the induced trajectory in probability space to improve the model accuracy further. Experiments with synthetic and real data show that the proposed JKO-iFlow network achieves competitive performance compared with existing flow and diffusion models at a significantly reduced computational and memory cost. | Normalizing flow neural networks by JKO scheme | [
"Chen Xu",
"Xiuyuan Cheng",
"Yao Xie"
] | Conference | spotlight | 2212.14424 | [
"https://github.com/hamrel-cxu/jko-iflow"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=ZPtzwr2SwJ | @inproceedings{
zhao2023learning,
title={Learning Adversarial Low-rank Markov Decision Processes with Unknown Transition and Full-information Feedback},
author={Canzhe Zhao and Ruofeng Yang and Baoxiang Wang and Xuezhou Zhang and Shuai Li},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZPtzwr2SwJ}
} | In this work, we study the low-rank MDPs with adversarially changed losses in the full-information feedback setting. In particular, the unknown transition probability kernel admits a low-rank matrix decomposition \citep{REPUCB22}, and the loss functions may change adversarially but are revealed to the learner at the end of each episode. We propose a policy optimization-based algorithm POLO, and we prove that it attains the $\widetilde{O}(K^{\frac{5}{6}}A^{\frac{1}{2}}d\ln(1+M)/(1-\gamma)^2)$ regret guarantee, where $d$ is rank of the transition kernel (and hence the dimension of the unknown representations), $A$ is the cardinality of the action space, $M$ is the cardinality of the model class that contains all the plausible representations, and $\gamma$ is the discounted factor. Notably, our algorithm is oracle-efficient and has a regret guarantee with no dependence on the size of potentially arbitrarily large state space. Furthermore, we also prove an $\Omega(\frac{\gamma^2}{1-\gamma} \sqrt{d A K})$ regret lower bound for this problem, showing that low-rank MDPs are statistically more difficult to learn than linear MDPs in the regret minimization setting. To the best of our knowledge, we present the first algorithm that interleaves representation learning, exploration, and exploitation to achieve the sublinear regret guarantee for RL with nonlinear function approximation and adversarial losses. | Learning Adversarial Low-rank Markov Decision Processes with Unknown Transition and Full-information Feedback | [
"Canzhe Zhao",
"Ruofeng Yang",
"Baoxiang Wang",
"Xuezhou Zhang",
"Shuai Li"
] | Conference | poster | 2311.07876 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=ZPj7ey5fXa | @inproceedings{
turki2023pynerf,
title={PyNe{RF}: Pyramidal Neural Radiance Fields},
author={Haithem Turki and Michael Zollh{\"o}fer and Christian Richardt and Deva Ramanan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZPj7ey5fXa}
} | Neural Radiance Fields (NeRFs) can be dramatically accelerated by spatial grid representations. However, they do not explicitly reason about scale and so introduce aliasing artifacts when reconstructing scenes captured at different camera distances. Mip-NeRF and its extensions propose scale-aware renderers that project volumetric frustums rather than point samples. But such approaches rely on positional encodings that are not readily compatible with grid methods. We propose a simple modification to grid-based models by training model heads at different spatial grid resolutions. At render time, we simply use coarser grids to render samples that cover larger volumes. Our method can be easily applied to existing accelerated NeRF methods and significantly improves rendering quality (reducing error rates by 20–90% across synthetic and unbounded real-world scenes) while incurring minimal performance overhead (as each model head is quick to evaluate). Compared to Mip-NeRF, we reduce error rates by 20% while training over 60x faster. | PyNeRF: Pyramidal Neural Radiance Fields | [
"Haithem Turki",
"Michael Zollhöfer",
"Christian Richardt",
"Deva Ramanan"
] | Conference | poster | 2312.00252 | [
"https://github.com/hturki/pynerf"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=ZOKhtz2Z9X | @inproceedings{
yu2023encoding,
title={Encoding Human Behavior in Information Design through Deep Learning},
author={Guanghui Yu and Wei Tang and Saumik Narayanan and Chien-Ju Ho},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZOKhtz2Z9X}
} | We initiate the study of $\textit{behavioral information design}$ through deep learning. In information design, a $\textit{sender}$ aims to persuade a $\textit{receiver}$ to take certain actions by strategically revealing information. We address scenarios in which the receiver might exhibit different behavior patterns other than the standard Bayesian rational assumption. We propose HAIDNet, a neural-network-based optimization framework for information design that can adapt to multiple representations of human behavior. Through extensive simulation, we show that HAIDNet can not only recover information policies that are near-optimal compared with known analytical solutions, but also can extend to designing information policies for settings that are computationally challenging (e.g., when there are multiple receivers) or for settings where there are no known solutions in general (e.g., when the receiver behavior does not follow the Bayesian rational assumption). We also conduct real-world human-subject experiments and demonstrate that our framework can capture human behavior from data and lead to more effective information policy for real-world human receivers. | Encoding Human Behavior in Information Design through Deep Learning | [
"Guanghui Yu",
"Wei Tang",
"Saumik Narayanan",
"Chien-Ju Ho"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=ZNBblMEP16 | @inproceedings{
choi2023depthdiscriminative,
title={Depth-discriminative Metric Learning for Monocular 3D Object Detection},
author={Wonhyeok Choi and Mingyu Shin and Sunghoon Im},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZNBblMEP16}
} | Monocular 3D object detection poses a significant challenge due to the lack of depth information in RGB images. Many existing methods strive to enhance the object depth estimation performance by allocating additional parameters for object depth estimation, utilizing extra modules or data. In contrast, we introduce a novel metric learning scheme that encourages the model to extract depth-discriminative features regardless of the visual attributes without increasing inference time and model size. Our method employs the distance-preserving function to organize the feature space manifold in relation to ground-truth object depth. The proposed $(K,B,\epsilon)$-quasi-isometric loss leverages predetermined pairwise distance restriction as guidance for adjusting the distance among object descriptors without disrupting the non-linearity of the natural feature manifold. Moreover, we introduce an auxiliary head for object-wise depth estimation, which enhances depth quality while maintaining the inference time. The broad applicability of our method is demonstrated through experiments that show improvements in overall performance when integrated into various baselines. The results show that our method consistently improves the performance of various baselines by 23.51\% and 5.78\% on average across KITTI and Waymo, respectively. | Depth-discriminative Metric Learning for Monocular 3D Object Detection | [
"Wonhyeok Choi",
"Mingyu Shin",
"Sunghoon Im"
] | Conference | poster | 2401.01075 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
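To make the $(K,B,\epsilon)$-quasi-isometric idea from "Depth-discriminative Metric Learning for Monocular 3D Object Detection" above concrete, the toy loss below penalizes descriptor pairs whose feature-space distance falls outside the usual quasi-isometry band $[d/K - B,\ K d + B]$ around their ground-truth depth gap $d$, with an $\epsilon$ slack. This is our reading of the definition, not the authors' implementation.

```python
import torch

def quasi_isometric_loss(features, depths, K=2.0, B=0.1, eps=0.05):
    """Hinge penalty for feature-distance pairs that violate the band
    [d/K - B, K*d + B] around the ground-truth depth difference d."""
    d_feat = torch.cdist(features, features)                          # pairwise feature distances
    d_depth = torch.cdist(depths.unsqueeze(1), depths.unsqueeze(1))   # pairwise depth gaps
    lower = d_depth / K - B
    upper = K * d_depth + B
    violation = torch.relu(lower - d_feat - eps) + torch.relu(d_feat - upper - eps)
    return violation.mean()

feats = torch.randn(16, 64, requires_grad=True)   # toy object descriptors
depths = torch.rand(16) * 50.0                    # toy ground-truth depths (meters)
loss = quasi_isometric_loss(feats, depths)
loss.backward()
print(loss.item())
```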
null | https://openreview.net/forum?id=ZKVxABGJ6r | @inproceedings{
chen2023panogrf,
title={Pano{GRF}: Generalizable Spherical Radiance Fields for Wide-baseline Panoramas},
author={Zheng Chen and Yan-Pei Cao and Yuan-Chen Guo and Chen Wang and Ying Shan and Song-Hai Zhang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZKVxABGJ6r}
} | Achieving an immersive experience enabling users to explore virtual environments with six degrees of freedom (6DoF) is essential for various applications such as virtual reality (VR). Wide-baseline panoramas are commonly used in these applications to reduce network bandwidth and storage requirements. However, synthesizing novel views from these panoramas remains a key challenge. Although existing neural radiance field methods can produce photorealistic views under narrow-baseline and dense image captures, they tend to overfit the training views when dealing with wide-baseline panoramas due to the difficulty in learning accurate geometry from sparse $360^{\circ}$ views. To address this problem, we propose PanoGRF, Generalizable Spherical Radiance Fields for Wide-baseline Panoramas, which construct spherical radiance fields incorporating $360^{\circ}$ scene priors. Unlike generalizable radiance fields trained on perspective images, PanoGRF avoids the information loss from panorama-to-perspective conversion and directly aggregates geometry and appearance features of 3D sample points from each panoramic view based on spherical projection. Moreover, as some regions of the panorama are only visible from one view while invisible from others under wide baseline settings, PanoGRF incorporates $360^{\circ}$ monocular depth priors into spherical depth estimation to improve the geometry features. Experimental results on multiple panoramic datasets demonstrate that PanoGRF significantly outperforms state-of-the-art generalizable view synthesis methods for wide-baseline panoramas (e.g., OmniSyn) and perspective images (e.g., IBRNet, NeuRay). | PanoGRF: Generalizable Spherical Radiance Fields for Wide-baseline Panoramas | [
"Zheng Chen",
"Yan-Pei Cao",
"Yuan-Chen Guo",
"Chen Wang",
"Ying Shan",
"Song-Hai Zhang"
] | Conference | poster | 2306.01531 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=ZIyAHaLlsn | @inproceedings{
yue2023resshift,
title={ResShift: Efficient Diffusion Model for Image Super-resolution by Residual Shifting},
author={Zongsheng Yue and Jianyi Wang and Chen Change Loy},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZIyAHaLlsn}
} | Diffusion-based image super-resolution (SR) methods are mainly limited by the low inference speed due to the requirements of hundreds or even thousands of sampling steps. Existing acceleration sampling techniques inevitably sacrifice performance to some extent, leading to over-blurry SR results. To address this issue, we propose a novel and efficient diffusion model for SR that significantly reduces the number of diffusion steps, thereby eliminating the need for post-acceleration during inference and its associated performance deterioration. Our method constructs a Markov chain that transfers between the high-resolution image and the low-resolution image by shifting the residual between them, substantially improving the transition efficiency. Additionally, an elaborate noise schedule is developed to flexibly control the shifting speed and the noise strength during the diffusion process. Extensive experiments demonstrate that the proposed method obtains superior or at least comparable performance to current state-of-the-art methods on both synthetic and real-world datasets, \textit{\textbf{even only with 20 sampling steps}}. Our code and model will be made publicly available. | ResShift: Efficient Diffusion Model for Image Super-resolution by Residual Shifting | [
"Zongsheng Yue",
"Jianyi Wang",
"Chen Change Loy"
] | Conference | spotlight | 2307.12348 | [
"https://github.com/zsyoaoa/resshift"
] | https://huggingface.co/papers/2307.12348 | 1 | 0 | 0 | 3 | 1 | [] | [] | [
"yuhj95/resshift"
] |
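The residual-shifting chain sketched in the ResShift abstract above can be illustrated numerically: each step moves the high-resolution image a bit further toward the (upsampled) low-resolution image by shifting a fraction of their residual and adding noise. The schedule, noise scale, and toy images below are made-up placeholders, not the paper's actual settings.

```python
import numpy as np

def residual_shift_sample(x_hr, y_lr_up, eta_t, kappa=0.2, rng=None):
    """Sample x_t by shifting a fraction eta_t of the residual (y - x) onto the
    HR image and adding Gaussian noise that grows with the shift (toy version)."""
    rng = rng or np.random.default_rng(0)
    residual = y_lr_up - x_hr
    return x_hr + eta_t * residual + kappa * np.sqrt(eta_t) * rng.standard_normal(x_hr.shape)

rng = np.random.default_rng(0)
x_hr = rng.random((64, 64))                             # stand-in high-resolution image
y_lr_up = x_hr + 0.05 * rng.standard_normal((64, 64))   # stand-in upsampled LR image

T = 15                                                  # few steps, as the abstract emphasizes
etas = np.linspace(0.0, 1.0, T + 1) ** 2                # made-up monotone shifting schedule
x_mid = residual_shift_sample(x_hr, y_lr_up, etas[T // 2])  # partway along the chain
x_T = residual_shift_sample(x_hr, y_lr_up, etas[-1])        # ends near the LR image plus noise
```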
null | https://openreview.net/forum?id=ZIfhYAE2xg | @inproceedings{
wei2023sparse,
title={Sparse Parameterization for Epitomic Dataset Distillation},
author={Xing Wei and Anjia Cao and Funing Yang and Zhiheng Ma},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZIfhYAE2xg}
} | The success of deep learning relies heavily on large and diverse datasets, but the storage, preprocessing, and training of such data present significant challenges. To address these challenges, dataset distillation techniques have been proposed to obtain smaller synthetic datasets that capture the essential information of the originals. In this paper, we introduce a Sparse Parameterization for Epitomic datasEt Distillation (SPEED) framework, which leverages the concept of dictionary learning and sparse coding to distill epitomes that represent pivotal information of the dataset. SPEED prioritizes proper parameterization of the synthetic dataset and introduces techniques to capture spatial redundancy within and between synthetic images. We propose Spatial-Agnostic Epitomic Tokens (SAETs) and Sparse Coding Matrices (SCMs) to efficiently represent and select significant features. Additionally, we build a Feature-Recurrent Network (FReeNet) to generate hierarchical features with high compression and storage efficiency. Experimental results demonstrate the superiority of SPEED in handling high-resolution datasets, achieving state-of-the-art performance on multiple benchmarks and downstream applications. Our framework is compatible with a variety of dataset matching approaches, generally enhancing their performance. This work highlights the importance of proper parameterization in epitomic dataset distillation and opens avenues for efficient representation learning. Source code is available at https://github.com/MIV-XJTU/SPEED. | Sparse Parameterization for Epitomic Dataset Distillation | [
"Xing Wei",
"Anjia Cao",
"Funing Yang",
"Zhiheng Ma"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=ZITOHWeAy7 | @inproceedings{
sun2023a,
title={A Graph-Theoretic Framework for Understanding Open-World Semi-Supervised Learning},
author={Yiyou Sun and Zhenmei Shi and Yixuan Li},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZITOHWeAy7}
} | Open-world semi-supervised learning aims at inferring both known and novel classes in unlabeled data, by harnessing prior knowledge from a labeled set with known classes. Despite its importance, there is a lack of theoretical foundations for this problem. This paper bridges the gap by formalizing a graph-theoretic framework tailored for the open-world setting, where the clustering can be theoretically characterized by graph factorization. Our graph-theoretic framework illuminates practical algorithms and provides guarantees. In particular, based on our graph formulation, we apply the algorithm called Spectral Open-world Representation Learning (SORL), and show that minimizing our loss is equivalent to performing spectral decomposition on the graph. Such equivalence allows us to derive a provable error bound on the clustering performance for both known and novel classes, and analyze rigorously when labeled data helps. Empirically, SORL can match or outperform several strong baselines on common benchmark datasets, which is appealing for practical usage while enjoying theoretical guarantees. | A Graph-Theoretic Framework for Understanding Open-World Semi-Supervised Learning | [
"Yiyou Sun",
"Zhenmei Shi",
"Yixuan Li"
] | Conference | spotlight | 2311.03524 | [
"https://github.com/deeplearning-wisc/sorl"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=ZGElmTRk3w | @inproceedings{
fiedler2023on,
title={On kernel-based statistical learning theory in the mean field limit},
author={Christian Fiedler and Michael Herty and Sebastian Trimpe},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZGElmTRk3w}
} | In many applications of machine learning, a large number of variables are considered. Motivated by machine learning of interacting particle systems, we consider the situation when the number of input variables goes to infinity. First, we continue the recent investigation of the mean field limit of kernels and their reproducing kernel Hilbert spaces, completing the existing theory. Next, we provide results relevant for approximation with such kernels in the mean field limit, including a representer theorem. Finally, we use these kernels in the context of statistical learning in the mean field limit, focusing on Support Vector Machines. In particular, we show mean field convergence of empirical and infinite-sample solutions as well as the convergence of the corresponding risks. On the one hand, our results establish rigorous mean field limits in the context of kernel methods, providing new theoretical tools and insights for large-scale problems. On the other hand, our setting corresponds to a new form of limit of learning problems, which seems to have not been investigated yet in the statistical learning theory literature. | On kernel-based statistical learning theory in the mean field limit | [
"Christian Fiedler",
"Michael Herty",
"Sebastian Trimpe"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=ZFwNdsDCRL | @inproceedings{
lanchantin2023learning,
title={Learning to Reason and Memorize with Self-Notes},
author={Jack Lanchantin and Shubham Toshniwal and Jason E Weston and Arthur Szlam and Sainbayar Sukhbaatar},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZFwNdsDCRL}
} | Large language models have been shown to struggle with multi-step reasoning, and do not retain previous reasoning steps for future use. We propose a simple method for solving both of these problems by allowing the model to take Self-Notes. Unlike recent chain-of-thought or scratchpad approaches, the model can deviate from the input context at any time to explicitly think and write down its thoughts. This allows the model to perform reasoning on the fly as it reads the context and even integrate previous reasoning steps, thus enhancing its memory with useful information and enabling multi-step reasoning. Experiments across a wide variety of tasks demonstrate that our method can outperform chain-of-thought and scratchpad methods by taking Self-Notes that interleave the input text. | Learning to Reason and Memorize with Self-Notes | [
"Jack Lanchantin",
"Shubham Toshniwal",
"Jason E Weston",
"Arthur Szlam",
"Sainbayar Sukhbaatar"
] | Conference | poster | 2305.00833 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=ZED5wdGous | @inproceedings{
granley2023humanintheloop,
title={Human-in-the-Loop Optimization for Deep Stimulus Encoding in Visual Prostheses},
author={Jacob Granley and Tristan Fauvel and Matthew Chalk and Michael Beyeler},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZED5wdGous}
} | Neuroprostheses show potential in restoring lost sensory function and enhancing human capabilities, but the sensations produced by current devices often seem unnatural or distorted. Exact placement of implants and differences in individual perception lead to significant variations in stimulus response, making personalized stimulus optimization a key challenge. Bayesian optimization could be used
to optimize patient-specific stimulation parameters with limited noisy observations, but is not feasible for high-dimensional stimuli. Alternatively, deep learning models can optimize stimulus encoding strategies, but typically assume perfect knowledge of patient-specific variations. Here we propose a novel, practically feasible approach that overcomes both of these fundamental limitations. First, a deep encoder network is trained to produce optimal stimuli for any individual patient by inverting a forward model mapping electrical stimuli to visual percepts. Second, a preferential Bayesian optimization strategy utilizes this encoder to learn the optimal patient-specific parameters for a new patient, using a minimal number of pairwise comparisons between candidate stimuli. We demonstrate the viability of this approach on a novel, state-of-the-art visual prosthesis model. Our approach quickly learns a personalized stimulus encoder and leads to dramatic improvements in the quality of restored vision, outperforming existing encoding strategies. Further, this approach is robust to noisy patient feedback and misspecifications in the underlying forward model. Overall, our results suggest that combining the strengths of deep learning and Bayesian optimization could significantly improve the perceptual experience of patients fitted with visual prostheses and may prove a viable solution for a range of neuroprosthetic technologies | Human-in-the-Loop Optimization for Deep Stimulus Encoding in Visual Prostheses | [
"Jacob Granley",
"Tristan Fauvel",
"Matthew Chalk",
"Michael Beyeler"
] | Conference | poster | 2306.13104 | [
"https://github.com/bionicvisionlab/2023-neurips-hilo"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=ZD65F3x1jU | @inproceedings{
wang2023on,
title={On Learning Latent Models with Multi-Instance Weak Supervision},
author={Kaifu Wang and Efthymia Tsamoura and Dan Roth},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZD65F3x1jU}
} | We consider a weakly supervised learning scenario where the supervision signal is generated by a transition function $\sigma$ of labels associated with multiple input instances. We formulate this problem as *multi-instance Partial Label Learning (multi-instance PLL)*, which is an extension to the standard PLL problem. Our problem is met in different fields, including latent structural learning and neuro-symbolic integration. Despite the existence of many learning techniques, limited theoretical analysis has been dedicated to this problem. In this paper, we provide the first theoretical study of multi-instance PLL with possibly an unknown transition $\sigma$. Our main contributions are as follows: First, we proposed a necessary and sufficient condition for the learnability of the problem. This condition nontrivially generalizes and relaxes the existing *small ambiguity degree* in PLL literature since we allow the transition to be deterministic. Second, we derived Rademacher-style error bounds based on the top-$k$ surrogate loss that is widely used in the neuro-symbolic literature. Furthermore, we conclude with empirical experiments for learning with an unknown transition. The empirical results align with our theoretical findings; however, they also expose the issue of scalability in the weak supervision literature. | On Learning Latent Models with Multi-Instance Weak Supervision | [
"Kaifu Wang",
"Efthymia Tsamoura",
"Dan Roth"
] | Conference | poster | 2306.13796 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=ZBzYWP2Gpl | @inproceedings{
rege2023adanns,
title={Ad{ANNS}: A Framework for Adaptive Semantic Search},
author={Aniket Rege and Aditya Kusupati and Sharan Ranjit S and Alan Fan and Qingqing Cao and Sham M. Kakade and Prateek Jain and Ali Farhadi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZBzYWP2Gpl}
} | Web-scale search systems learn an encoder to embed a given query which is then hooked into an approximate nearest neighbor search (ANNS) pipeline to retrieve similar data points. To accurately capture tail queries and data points, learned representations typically are _rigid, high-dimensional_ vectors that are generally used as-is in the entire ANNS pipeline and can lead to computationally expensive retrieval. In this paper, we argue that instead of rigid representations, different stages of ANNS can leverage _adaptive representations_ of varying capacities to achieve significantly better accuracy-compute trade-offs, i.e., stages of ANNS that can get away with more approximate computation should use a lower-capacity representation of the same data point. To this end, we introduce AdANNS, a novel ANNS design framework that explicitly leverages the flexibility of Matryoshka Representations. We demonstrate state-of-the-art accuracy-compute trade-offs using novel AdANNS-based key ANNS building blocks like search data structures (AdANNS-IVF) and quantization (AdANNS-OPQ). For example on ImageNet retrieval, AdANNS-IVF is up to $\mathbf{1.5}$% more accurate than the rigid representations-based IVF at the same compute budget; and matches accuracy while being up to $\mathbf{90}\times$ faster in _wall-clock time_. For Natural Questions, $32$-byte AdANNS-OPQ matches the accuracy of the $64$-byte OPQ baseline constructed using rigid representations -- _same accuracy at half the cost!_ We further show that the gains from AdANNS translate to modern-day composite ANNS indices that combine search structures and quantization. Finally, we demonstrate that AdANNS can enable inference-time adaptivity for compute-aware search on ANNS indices built non-adaptively on matryoshka representations. Code is open-sourced at https://github.com/RAIVNLab/AdANNS. | AdANNS: A Framework for Adaptive Semantic Search | [
"Aniket Rege",
"Aditya Kusupati",
"Sharan Ranjit S",
"Alan Fan",
"Qingqing Cao",
"Sham M. Kakade",
"Prateek Jain",
"Ali Farhadi"
] | Conference | poster | 2305.19435 | [
"https://github.com/raivnlab/adanns"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
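The accuracy-compute trade-off described in the AdANNS abstract above, low-capacity representations for the coarse stage and full-capacity ones only for re-ranking, can be mimicked with plain NumPy: shortlist with a low-dimensional prefix of each embedding, then re-score the shortlist with the full vector. The dimensions and random data are toy assumptions; prefixes are only meaningful for representations trained to be nested (e.g., Matryoshka Representations).

```python
import numpy as np

rng = np.random.default_rng(0)
db = rng.standard_normal((10_000, 256)).astype(np.float32)   # full-capacity embeddings
query = rng.standard_normal(256).astype(np.float32)

def adaptive_search(db, query, coarse_dim=16, shortlist=100, k=10):
    """Stage 1: cheap shortlist using only a low-dimensional prefix of the vectors.
    Stage 2: exact re-ranking of the shortlist with the full-dimensional vectors."""
    coarse = np.linalg.norm(db[:, :coarse_dim] - query[:coarse_dim], axis=1)
    candidates = np.argpartition(coarse, shortlist)[:shortlist]
    fine = np.linalg.norm(db[candidates] - query, axis=1)
    return candidates[np.argsort(fine)[:k]]

print(adaptive_search(db, query))
```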
null | https://openreview.net/forum?id=ZBxycYCuEL | @inproceedings{
xue2023stability,
title={Stability Guarantees for Feature Attributions with Multiplicative Smoothing},
author={Anton Xue and Rajeev Alur and Eric Wong},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZBxycYCuEL}
} | Explanation methods for machine learning models tend not to provide any formal guarantees and may not reflect the underlying decision-making process.
In this work, we analyze stability as a property for reliable feature attribution methods.
We prove that relaxed variants of stability are guaranteed if the model is sufficiently Lipschitz with respect to the masking of features.
We develop a smoothing method called Multiplicative Smoothing (MuS) to achieve such a model.
We show that MuS overcomes the theoretical limitations of standard smoothing techniques and can be integrated with any classifier and feature attribution method.
We evaluate MuS on vision and language models with various feature attribution methods, such as LIME and SHAP, and demonstrate that MuS endows feature attributions with non-trivial stability guarantees. | Stability Guarantees for Feature Attributions with Multiplicative Smoothing | [
"Anton Xue",
"Rajeev Alur",
"Eric Wong"
] | Conference | poster | 2307.05902 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
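The smoothing operator analyzed in "Stability Guarantees for Feature Attributions with Multiplicative Smoothing" above can be caricatured in a model-agnostic way: average a classifier's output over many random keep/drop maskings of the input features. This is a generic randomized smoother in the spirit of the setup, not MuS's specific construction; the base model and keep probability are arbitrary stand-ins.

```python
import numpy as np

def smoothed_predict(model, x, keep_prob=0.8, n_samples=256, rng=None):
    """Average the model's output over random keep/drop maskings of the features.
    A generic randomized smoother, not MuS's de-randomized construction."""
    rng = rng or np.random.default_rng(0)
    masks = rng.random((n_samples, x.shape[-1])) < keep_prob
    outputs = np.stack([model(x * m) for m in masks])
    return outputs.mean(axis=0)

# Toy "model": a fixed linear scorer over 10 features, two classes.
W = np.random.default_rng(1).standard_normal((10, 2))
model = lambda v: v @ W
x = np.ones(10)
print(smoothed_predict(model, x))
```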
null | https://openreview.net/forum?id=ZBB8EFO7ma | @inproceedings{
liu2023aiming,
title={Aiming towards the minimizers: fast convergence of {SGD} for overparametrized problems},
author={Chaoyue Liu and Dmitriy Drusvyatskiy and Misha Belkin and Damek Davis and Yian Ma},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZBB8EFO7ma}
} | Modern machine learning paradigms, such as deep learning, occur in or close to the interpolation regime, wherein the number of model parameters is much larger than the number of data samples.
In this work, we propose a regularity condition within the interpolation regime which endows the stochastic gradient method with the same worst-case iteration complexity as the deterministic gradient method, while using only a single sampled gradient (or a minibatch) in each iteration. In contrast, all existing guarantees require the stochastic gradient method to take small steps, thereby resulting in a much slower linear rate of convergence. Finally, we demonstrate that our condition holds when training sufficiently wide feedforward neural networks with a linear output layer. | Aiming towards the minimizers: fast convergence of SGD for overparametrized problems | [
"Chaoyue Liu",
"Dmitriy Drusvyatskiy",
"Misha Belkin",
"Damek Davis",
"Yian Ma"
] | Conference | poster | 2306.02601 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=ZARAiV25CW | @inproceedings{
gao2023generalized,
title={Generalized Bayesian Inference for Scientific Simulators via Amortized Cost Estimation},
author={Richard Gao and Michael Deistler and Jakob H. Macke},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ZARAiV25CW}
} | Simulation-based inference (SBI) enables amortized Bayesian inference for simulators with implicit likelihoods. But when we are primarily interested in the quality of predictive simulations, or when the model cannot exactly reproduce the observed data (i.e., is misspecified), targeting the Bayesian posterior may be overly restrictive. Generalized Bayesian Inference (GBI) aims to robustify inference for (misspecified) simulator models, replacing the likelihood-function with a cost function that evaluates the goodness of parameters relative to data. However, GBI methods generally require running multiple simulations to estimate the cost function at each parameter value during inference, making the approach computationally infeasible for even moderately complex simulators. Here, we propose amortized cost estimation (ACE) for GBI to address this challenge: We train a neural network to approximate the cost function, which we define as the expected distance between simulations produced by a parameter and observed data. The trained network can then be used with MCMC to infer GBI posteriors for any observation without running additional simulations. We show that, on several benchmark tasks, ACE accurately predicts cost and provides predictive simulations that are closer to synthetic observations than other SBI methods, especially for misspecified simulators. Finally, we apply ACE to infer parameters of the Hodgkin-Huxley model given real intracellular recordings from the Allen Cell Types Database. ACE identifies better data-matching parameters while being an order of magnitude more simulation-efficient than a standard SBI method. In summary, ACE combines the strengths of SBI methods and GBI to perform robust and simulation-amortized inference for scientific simulators. | Generalized Bayesian Inference for Scientific Simulators via Amortized Cost Estimation | [
"Richard Gao",
"Michael Deistler",
"Jakob H. Macke"
] | Conference | poster | 2305.15208 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
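A hedged sketch of the amortized-cost-estimation idea in the abstract above; the toy simulator, distance function, and network architecture are illustrative assumptions, not the authors' implementation.

```python
# Sketch of amortized cost estimation in the spirit of the abstract; the toy
# simulator, distance, and architecture are illustrative assumptions.
import torch
import torch.nn as nn

def simulator(theta):                            # stand-in implicit-likelihood simulator
    return theta + 0.1 * torch.randn_like(theta)

dim = 2
prior = torch.distributions.Normal(torch.zeros(dim), torch.ones(dim))
cost_net = nn.Sequential(nn.Linear(2 * dim, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(cost_net.parameters(), lr=1e-3)

for _ in range(2000):
    theta = prior.sample((256,))
    x_obs = simulator(prior.sample((256,)))      # plausible "observations"
    target = (simulator(theta) - x_obs).norm(dim=-1, keepdim=True)  # one-sample cost
    pred = cost_net(torch.cat([theta, x_obs], dim=-1))
    loss = ((pred - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# A generalized posterior would then target prior(theta) * exp(-beta * cost_net(theta, x_obs)),
# e.g. via Metropolis-Hastings, with no further simulator calls at inference time.
```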
null | https://openreview.net/forum?id=Z8TjsPFBSx | @inproceedings{
li2023characterizing,
title={Characterizing the Impacts of Semi-supervised Learning for Weak Supervision},
author={Jeffrey Li and Jieyu Zhang and Ludwig Schmidt and Alexander Ratner},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Z8TjsPFBSx}
} | Labeling training data is a critical and expensive step in producing high-accuracy ML models, whether training from scratch or fine-tuning.
To make labeling more efficient, two major approaches are programmatic weak supervision (WS) and semi-supervised learning (SSL). More recent works have either explicitly or implicitly used techniques at their intersection, but in various complex and ad hoc ways. In this work, we define a simple, modular design space to study the use of SSL techniques for WS more systematically. Surprisingly, we find that fairly simple methods from our design space match the performance of more complex state-of-the-art methods, averaging a 3 p.p. increase in accuracy/F1-score across 8 standard WS benchmarks. Further, we provide practical guidance on when different components are worth their added complexity and training costs. Contrary to current understanding, we find using SSL is not necessary to obtain the best performance on most WS benchmarks but is more effective when: (1) end models are smaller, and (2) WS provides labels for only a small portion of training examples. | Characterizing the Impacts of Semi-supervised Learning for Weak Supervision | [
"Jeffrey Li",
"Jieyu Zhang",
"Ludwig Schmidt",
"Alexander Ratner"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
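One simple point in the kind of design space the abstract above studies, sketched with assumed toy data, labeling functions, and a 0.9 confidence threshold (not the benchmarked methods): aggregate labeling-function votes into weak labels, train the end model on them, then add confidently pseudo-labeled uncovered examples.

```python
# Generic sketch: weak supervision (majority vote over abstaining labeling
# functions) plus one round of confidence-thresholded self-training; data,
# labeling functions, and the threshold are assumptions for illustration.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

def majority_vote(lf_votes):
    """lf_votes: (n, num_lfs) ints, -1 meaning the labeling function abstains."""
    out = []
    for row in lf_votes:
        votes = row[row >= 0]
        out.append(int(np.bincount(votes, minlength=2).argmax()) if len(votes) else -1)
    return torch.tensor(out)

def fit(model, X, y, epochs=200):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(epochs):
        loss = F.cross_entropy(model(X), y)
        opt.zero_grad(); loss.backward(); opt.step()

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 600)
X = torch.tensor(rng.standard_normal((600, 5)) + y_true[:, None], dtype=torch.float32)
lf_votes = np.stack([np.where(rng.random(600) < 0.7, y_true, -1) for _ in range(3)], axis=1)

weak = majority_vote(lf_votes)
covered = weak >= 0                       # examples that at least one source labels
model = nn.Linear(5, 2)
fit(model, X[covered], weak[covered])     # WS step: train the end model on weak labels

with torch.no_grad():                     # SSL step: pseudo-label uncovered examples
    conf, pseudo = model(X[~covered]).softmax(-1).max(-1)
keep = conf > 0.9
fit(model, torch.cat([X[covered], X[~covered][keep]]),
    torch.cat([weak[covered], pseudo[keep]]))
```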
null | https://openreview.net/forum?id=Z7Cz9un2Fy | @inproceedings{
ham2023neokd,
title={{NEO}-{KD}: Knowledge-Distillation-Based Adversarial Training for Robust Multi-Exit Neural Networks},
author={Seokil Ham and Jungwuk Park and Dong-Jun Han and Jaekyun Moon},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Z7Cz9un2Fy}
} | While multi-exit neural networks are regarded as a promising solution for making efficient inference via early exits, combating adversarial attacks remains a challenging problem. In multi-exit networks, due to the high dependency among different submodels, an adversarial example targeting a specific exit not only degrades the performance of the target exit but also reduces the performance of all other exits concurrently. This makes multi-exit networks highly vulnerable to simple adversarial attacks. In this paper, we propose NEO-KD, a knowledge-distillation-based adversarial training strategy that tackles this fundamental challenge based on two key contributions. NEO-KD first resorts to neighbor knowledge distillation to guide the output of the adversarial examples to tend to the ensemble outputs of neighbor exits of clean data. NEO-KD also employs exit-wise orthogonal knowledge distillation for reducing adversarial transferability across different submodels. The result is a significantly improved robustness against adversarial attacks. Experimental results on various datasets/models show that our method achieves the best adversarial accuracy with reduced computation budgets, compared to the baselines relying on existing adversarial training or knowledge distillation techniques for multi-exit networks. | NEO-KD: Knowledge-Distillation-Based Adversarial Training for Robust Multi-Exit Neural Networks | [
"Seokil Ham",
"Jungwuk Park",
"Dong-Jun Han",
"Jaekyun Moon"
] | Conference | poster | 2311.00428 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
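A rough illustration of one ingredient described in the abstract above, namely distilling an exit's adversarial output toward the ensemble of its neighboring exits' clean outputs; the neighborhood definition, weighting, and full NEO-KD objective (including the exit-wise orthogonal term) are the paper's and are not reproduced here.

```python
# Rough illustration of a neighbor-ensemble distillation term for multi-exit
# outputs; the actual NEO-KD objective, neighbor definition, orthogonality
# term, and weighting are specified in the paper, not here.
import torch
import torch.nn.functional as F

def neighbor_kd_loss(adv_logits, clean_logits):
    """adv_logits, clean_logits: lists of per-exit logits, each (batch, classes)."""
    num_exits = len(adv_logits)
    loss = 0.0
    for k in range(num_exits):
        nbrs = [j for j in (k - 1, k, k + 1) if 0 <= j < num_exits]  # assumed neighborhood
        target = torch.stack([clean_logits[j].softmax(-1) for j in nbrs]).mean(0)
        loss = loss + F.kl_div(adv_logits[k].log_softmax(-1), target, reduction="batchmean")
    return loss / num_exits

# usage with dummy tensors for a 3-exit network
clean = [torch.randn(8, 10) for _ in range(3)]
adv = [torch.randn(8, 10, requires_grad=True) for _ in range(3)]
print(neighbor_kd_loss(adv, clean))
```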
null | https://openreview.net/forum?id=Z764QxwETf | @inproceedings{
hosseini2023puzzlefusion,
title={Puzzlefusion: Unleashing the Power of Diffusion Models for Spatial Puzzle Solving},
author={Sepidehsadat Hosseini and Mohammad Amin Shabani and Saghar Irandoust and Yasutaka Furukawa},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Z764QxwETf}
} | This paper presents an end-to-end neural architecture based on Diffusion Models for spatial puzzle solving, particularly jigsaw puzzle and room arrangement tasks.
In the latter task, for instance, the proposed system ``PuzzleFusion'' takes a set of room layouts as polygonal curves in the top-down view and aligns the room layout pieces by estimating their 2D translations and rotations, akin to solving the jigsaw puzzle of room layouts. A surprising discovery of the paper is that the simple use of a Diffusion Model effectively solves these challenging spatial puzzle tasks as a conditional generation process.
To enable learning of an end-to-end neural system, the paper introduces new datasets with ground-truth arrangements: 1) the 2D Voronoi Jigsaw Dataset, a synthetic one where pieces are generated by the Voronoi diagram of a 2D point set; and 2) the MagicPlan Dataset, a real one from a production pipeline by MagicPlan, where pieces are room layouts constructed by an augmented-reality app used by real-estate consumers.
The qualitative and quantitative evaluations demonstrate that the proposed approach outperforms the competing methods by significant margins in all three spatial puzzle tasks. Code and data are provided at https://sepidsh.github.io/puzzlefusion. | Puzzlefusion: Unleashing the Power of Diffusion Models for Spatial Puzzle Solving | [
"Sepidehsadat Hosseini",
"Mohammad Amin Shabani",
"Saghar Irandoust",
"Yasutaka Furukawa"
] | Conference | spotlight | 2211.13785 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
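To make the conditional-generation framing above concrete, here is a generic denoising-diffusion training step over per-piece 2D poses (translation plus rotation as cos/sin), conditioned on piece features; the denoiser, noise schedule, and conditioning are placeholder assumptions rather than PuzzleFusion's actual design.

```python
# Generic DDPM-style training step over piece poses (tx, ty, cos, sin),
# conditioned on per-piece features; the denoiser, noise schedule, and
# conditioning are placeholder assumptions, not PuzzleFusion's actual design.
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

class PoseDenoiser(nn.Module):
    def __init__(self, feat_dim=16, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(4 + feat_dim + 1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 4))

    def forward(self, noisy_pose, piece_feat, t):
        t_emb = t.float().unsqueeze(-1) / T
        return self.net(torch.cat([noisy_pose, piece_feat, t_emb], dim=-1))

model = PoseDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

pose = torch.randn(32, 4)            # ground-truth (tx, ty, cos, sin) per piece
feat = torch.randn(32, 16)           # per-piece geometry features (e.g. polygon encoding)

t = torch.randint(0, T, (32,))
noise = torch.randn_like(pose)
ab = alpha_bar[t].unsqueeze(-1)
noisy = ab.sqrt() * pose + (1 - ab).sqrt() * noise    # forward diffusion on the poses
loss = ((model(noisy, feat, t) - noise) ** 2).mean()  # predict the added noise
opt.zero_grad(); loss.backward(); opt.step()
```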
null | https://openreview.net/forum?id=Z6eexoCy7W | @inproceedings{
gupta2023topologyaware,
title={Topology-Aware Uncertainty for Image Segmentation},
author={Saumya Gupta and Yikai Zhang and Xiaoling Hu and Prateek Prasanna and Chao Chen},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Z6eexoCy7W}
} | Segmentation of curvilinear structures such as vasculature and road networks is challenging due to relatively weak signals and complex geometry/topology. To facilitate and accelerate large scale annotation, one has to adopt semi-automatic approaches such as proofreading by experts. In this work, we focus on uncertainty estimation for such tasks, so that highly uncertain, and thus error-prone structures can be identified for human annotators to verify. Unlike most existing works, which provide pixel-wise uncertainty maps, we stipulate it is crucial to estimate uncertainty in the units of topological structures, e.g., small pieces of connections and branches. To achieve this, we leverage tools from topological data analysis, specifically discrete Morse theory (DMT), to first capture the structures, and then reason about their uncertainties. To model the uncertainty, we (1) propose a joint prediction model that estimates the uncertainty of a structure while taking the neighboring structures into consideration (inter-structural uncertainty); (2) propose a novel Probabilistic DMT to model the inherent uncertainty within each structure (intra-structural uncertainty) by sampling its representations via a perturb-and-walk scheme. On various 2D and 3D datasets, our method produces better structure-wise uncertainty maps compared to existing works. Code available at: https://github.com/Saumya-Gupta-26/struct-uncertainty | Topology-Aware Uncertainty for Image Segmentation | [
"Saumya Gupta",
"Yikai Zhang",
"Xiaoling Hu",
"Prateek Prasanna",
"Chao Chen"
] | Conference | poster | 2306.05671 | [
"https://github.com/Saumya-Gupta-26/struct-uncertainty"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=Z57JrmubNl | @inproceedings{
wen2023treerings,
title={Tree-Rings Watermarks: Invisible Fingerprints for Diffusion Images},
author={Yuxin Wen and John Kirchenbauer and Jonas Geiping and Tom Goldstein},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Z57JrmubNl}
} | Watermarking the outputs of generative models is a crucial technique for tracing copyright and preventing potential harm from AI-generated content. In this paper, we introduce a novel technique called Tree-Ring Watermarking that robustly fingerprints diffusion model outputs. Unlike existing methods that perform post-hoc modifications to images after sampling, Tree-Ring Watermarking subtly influences the entire sampling process, resulting in a model fingerprint that is invisible to humans. The watermark embeds a pattern into the initial noise vector used for sampling. These patterns are structured in Fourier space so that they are invariant to convolutions, crops, dilations, flips, and rotations. After image generation, the watermark signal is detected by inverting the diffusion process to retrieve the noise vector, which is then checked for the embedded signal. We demonstrate that this technique can be easily applied to arbitrary diffusion models, including text-conditioned Stable Diffusion, as a plug-in with negligible loss in FID. Our watermark is semantically hidden in the image space and is far more robust than watermarking alternatives that are currently deployed. | Tree-Rings Watermarks: Invisible Fingerprints for Diffusion Images | [
"Yuxin Wen",
"John Kirchenbauer",
"Jonas Geiping",
"Tom Goldstein"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
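A toy version of the core mechanism in the abstract above: write a fixed key into a ring of Fourier coefficients of the initial noise, and later compare the coefficients inside the same ring against the key. The ring radii and key here are arbitrary assumptions; the real method recovers the noise by inverting the diffusion sampler and applies a proper statistical test.

```python
# Toy Fourier-ring watermark on the initial noise; the real method recovers the
# noise by inverting the diffusion sampler (e.g. DDIM) and uses a proper
# statistical test, both of which are only gestured at here.
import torch

def ring_mask(size, r_in=8, r_out=12):
    yy, xx = torch.meshgrid(torch.arange(size), torch.arange(size), indexing="ij")
    r = ((yy - size / 2) ** 2 + (xx - size / 2) ** 2) ** 0.5
    return (r >= r_in) & (r <= r_out)

size = 64
mask = ring_mask(size)
key = torch.randn(int(mask.sum()), dtype=torch.complex64)   # the watermark key

noise = torch.randn(size, size)                             # initial diffusion noise
spec = torch.fft.fftshift(torch.fft.fft2(noise))
spec[mask] = key                                            # write the key into the ring
wm_noise = torch.fft.ifft2(torch.fft.ifftshift(spec)).real  # watermarked initial noise

# detection: compare the Fourier coefficients inside the ring against the key
recovered = torch.fft.fftshift(torch.fft.fft2(wm_noise))
plain = torch.fft.fftshift(torch.fft.fft2(torch.randn(size, size)))
print("ring distance, watermarked:", (recovered[mask] - key).abs().mean().item())
print("ring distance, unrelated:  ", (plain[mask] - key).abs().mean().item())
```

The watermarked tensor yields a far smaller ring distance than unrelated noise, which is the signal a detector thresholds on.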
null | https://openreview.net/forum?id=Z2he2Y0MoH | @inproceedings{
gao2023wide,
title={Wide Neural Networks as Gaussian Processes: Lessons from Deep Equilibrium Models},
author={Tianxiang Gao and Xiaokai Huo and Hailiang Liu and Hongyang Gao},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Z2he2Y0MoH}
} | Neural networks with wide layers have attracted significant attention due to their equivalence to Gaussian processes, enabling perfect fitting of training data while maintaining generalization performance, known as benign overfitting. However, existing results mainly focus on shallow or finite-depth networks, necessitating a comprehensive analysis of wide neural networks with infinite-depth layers, such as neural ordinary differential equations (ODEs) and deep equilibrium models (DEQs).
In this paper, we specifically investigate the deep equilibrium model (DEQ), an infinite-depth neural network with shared weight matrices across layers. Our analysis reveals that as the width of DEQ layers approaches infinity, it converges to a Gaussian process, establishing what is known as the Neural Network and Gaussian Process (NNGP) correspondence. Remarkably, this convergence holds even when the limits of depth and width are interchanged, which is not observed in typical infinite-depth Multilayer Perceptron (MLP) networks. Furthermore, we demonstrate that the associated Gaussian vector remains non-degenerate for any pairwise distinct input data, ensuring a strictly positive smallest eigenvalue of the corresponding kernel matrix using the NNGP kernel. These findings serve as fundamental elements for studying the training and generalization of DEQs, laying the groundwork for future research in this area. | Wide Neural Networks as Gaussian Processes: Lessons from Deep Equilibrium Models | [
"Tianxiang Gao",
"Xiaokai Huo",
"Hailiang Liu",
"Hongyang Gao"
] | Conference | poster | 2310.10767 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
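To give a flavor of the kernel computations the abstract above refers to, the sketch below iterates the ReLU (arc-cosine) NNGP kernel map with the input kernel re-injected at every step, mimicking a weight-shared, effectively infinite-depth layer; the constants and the exact DEQ recursion are assumptions, not the paper's derivation.

```python
# Illustrative fixed point of the ReLU (arc-cosine) NNGP kernel map with the
# input kernel re-injected at every step, mimicking a weight-shared,
# effectively infinite-depth layer; constants are arbitrary assumptions.
import numpy as np

def relu_kernel(K):
    """Post-activation covariance of ReLU units with pre-activation covariance K."""
    d = np.sqrt(np.outer(np.diag(K), np.diag(K)))
    theta = np.arccos(np.clip(K / d, -1.0, 1.0))
    return d * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)

def fixed_point_kernel(K_input, sw2=1.0, su2=0.5, sb2=0.1, tol=1e-10):
    K = K_input.copy()
    for _ in range(10_000):
        K_next = sw2 * relu_kernel(K) + su2 * K_input + sb2   # shared-weight layer map
        if np.max(np.abs(K_next - K)) < tol:
            break
        K = K_next
    return K

X = np.array([[1.0, 0.5], [-0.3, 0.8]])
K_input = X @ X.T / X.shape[1]
print(fixed_point_kernel(K_input))          # limiting 2x2 kernel matrix
```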
null | https://openreview.net/forum?id=Z2L7F0nekb | @inproceedings{
qi2023metalearning,
title={Meta-Learning with Neural Bandit Scheduler},
author={Yunzhe Qi and Yikun Ban and Tianxin Wei and Jiaru Zou and Huaxiu Yao and Jingrui He},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Z2L7F0nekb}
} | Meta-learning has been proven an effective learning paradigm for training machine learning models with good generalization ability. Apart from the common practice of uniformly sampling the meta-training tasks, existing methods working on task scheduling strategies are mainly based on pre-defined sampling protocols or the assumed task-model correlations, and greedily make scheduling decisions, which can lead to sub-optimal performance bottlenecks of the meta-model. In this paper, we propose a novel task scheduling framework under Contextual Bandits settings, named BASS, which directly optimizes the task scheduling strategy based on the status of the meta-model. By balancing the exploitation and exploration in meta-learning task scheduling, BASS can help tackle the challenge of limited knowledge about the task distribution during the early stage of meta-training, while simultaneously exploring potential benefits for forthcoming meta-training iterations through an adaptive exploration strategy. Theoretical analysis and extensive experiments are presented to show the effectiveness of our proposed framework. | Meta-Learning with Neural Bandit Scheduler | [
"Yunzhe Qi",
"Yikun Ban",
"Tianxin Wei",
"Jiaru Zou",
"Huaxiu Yao",
"Jingrui He"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
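BASS itself uses a neural bandit; as a rough stand-in, the sketch below schedules meta-training tasks with LinUCB from assumed context features describing the candidate task and the meta-model state, balancing exploitation against an exploration bonus.

```python
# Stand-in contextual-bandit task scheduler (LinUCB rather than the neural
# bandit used by BASS); task features, rewards, and alpha are assumptions.
import numpy as np

class LinUCBScheduler:
    def __init__(self, feat_dim, alpha=1.0):
        self.A = np.eye(feat_dim)              # running design matrix
        self.b = np.zeros(feat_dim)
        self.alpha = alpha                     # exploration strength

    def select(self, task_feats):              # task_feats: (num_tasks, feat_dim)
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b
        bonus = np.sqrt(np.einsum("nd,dk,nk->n", task_feats, A_inv, task_feats))
        return int(np.argmax(task_feats @ theta + self.alpha * bonus))

    def update(self, feat, reward):            # reward: e.g. meta-validation improvement
        self.A += np.outer(feat, feat)
        self.b += reward * feat

# usage sketch: features could encode the candidate task plus the meta-model state
sched = LinUCBScheduler(feat_dim=8)
feats = np.random.randn(50, 8)                 # 50 candidate meta-training tasks
k = sched.select(feats)                        # pick the next task to train on
sched.update(feats[k], reward=0.01)            # feed back the observed improvement
```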
null | https://openreview.net/forum?id=Z28nPtAVxx | @inproceedings{
yuan2023optimal,
title={Optimal Extragradient-Based Algorithms for Stochastic Variational Inequalities with Separable Structure},
author={Angela Yuan and Chris Junchi Li and Gauthier Gidel and Michael Jordan and Quanquan Gu and Simon Shaolei Du},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Z28nPtAVxx}
} | We consider the problem of solving stochastic monotone variational inequalities with a separable structure using a stochastic first-order oracle. Building on standard extragradient for variational inequalities we propose a novel algorithm---stochastic \emph{accelerated gradient-extragradient} (AG-EG)---for strongly monotone variational inequalities (VIs). Our approach combines the strengths of extragradient and Nesterov acceleration. By showing that its iterates remain in a bounded domain and applying scheduled restarting, we prove that AG-EG has an optimal convergence rate for strongly monotone VIs. Furthermore, when specializing to the particular case of bilinearly coupled strongly-convex-strongly-concave saddle-point problems, including bilinear games, our algorithm achieves fine-grained convergence rates that match the respective lower bounds, with the stochasticity being characterized by an additive statistical error term that is optimal up to a constant prefactor. | Optimal Extragradient-Based Algorithms for Stochastic Variational Inequalities with Separable Structure | [
"Angela Yuan",
"Chris Junchi Li",
"Gauthier Gidel",
"Michael Jordan",
"Quanquan Gu",
"Simon Shaolei Du"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
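For concreteness, here is the extragradient building block the abstract above starts from, run with noisy gradients on a bilinearly coupled strongly-convex-strongly-concave toy problem; the AG-EG acceleration and scheduled restarting are the paper's contribution and are not reproduced here.

```python
# Plain stochastic extragradient on a bilinearly coupled strongly-convex-
# strongly-concave toy problem; the AG-EG acceleration and restarting from the
# paper are not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
d, mu, sigma = 10, 1.0, 0.01
A = rng.standard_normal((d, d)) / np.sqrt(d)        # bilinear coupling matrix

def noisy_grads(x, y):       # f(x, y) = mu/2 |x|^2 + x^T A y - mu/2 |y|^2
    gx = mu * x + A @ y + sigma * rng.standard_normal(d)
    gy = A.T @ x - mu * y + sigma * rng.standard_normal(d)
    return gx, gy

eta = 1.0 / (4 * (mu + np.linalg.norm(A, 2)))       # conservative constant step
x, y = np.ones(d), np.ones(d)
for _ in range(2000):
    gx, gy = noisy_grads(x, y)
    xh, yh = x - eta * gx, y + eta * gy             # extrapolation ("look-ahead") point
    gx, gy = noisy_grads(xh, yh)
    x, y = x - eta * gx, y + eta * gy               # update with the look-ahead gradient
print("distance to the saddle point at the origin:", np.linalg.norm(np.concatenate([x, y])))
```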
null | https://openreview.net/forum?id=Z1W0u3Cr74 | @inproceedings{
ke2023revisiting,
title={Revisiting Logistic-softmax Likelihood in Bayesian Meta-Learning for Few-Shot Classification},
author={Tianjun Ke and Haoqun Cao and Zenan Ling and Feng Zhou},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=Z1W0u3Cr74}
} | Meta-learning has demonstrated promising results in few-shot classification (FSC) by learning to solve new problems using prior knowledge. Bayesian methods are effective at characterizing uncertainty in FSC, which is crucial in high-risk fields. In this context, the logistic-softmax likelihood is often employed as an alternative to the softmax likelihood in multi-class Gaussian process classification due to its conditional conjugacy property. However, the theoretical property of logistic-softmax is not clear and previous research indicated that the inherent uncertainty of logistic-softmax leads to suboptimal performance. To mitigate these issues, we revisit and redesign the logistic-softmax likelihood, which enables control of the \textit{a priori} confidence level through a temperature parameter. Furthermore, we theoretically and empirically show that softmax can be viewed as a special case of logistic-softmax and logistic-softmax induces a larger family of data distribution than softmax. Utilizing modified logistic-softmax, we integrate the data augmentation technique into the deep kernel based Gaussian process meta-learning framework, and derive an analytical mean-field approximation for task-specific updates. Our approach yields well-calibrated uncertainty estimates and achieves comparable or superior results on standard benchmark datasets. Code is publicly available at \url{https://github.com/keanson/revisit-logistic-softmax}. | Revisiting Logistic-softmax Likelihood in Bayesian Meta-Learning for Few-Shot Classification | [
"Tianjun Ke",
"Haoqun Cao",
"Zenan Ling",
"Feng Zhou"
] | Conference | poster | 2310.10379 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
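One common form of the logistic-softmax likelihood with the temperature parameter mentioned in the abstract above; the exact parameterization and the softmax-as-special-case argument are in the paper.

```python
# One common form of the logistic-softmax likelihood with a temperature
# parameter; the exact parameterization used in the paper may differ.
import torch

def logistic_softmax(f, temperature=1.0):
    """p(y=c | f) proportional to sigmoid(f_c / temperature), normalized over classes."""
    s = torch.sigmoid(f / temperature)
    return s / s.sum(dim=-1, keepdim=True)

f = torch.randn(4, 5)                               # (batch, classes) GP function values
print(logistic_softmax(f, temperature=0.5).sum(-1)) # each row sums to 1
print(torch.softmax(f, dim=-1)[0])                  # ordinary softmax, for comparison
```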