| Column | Dtype | Values |
|---|---|---|
| bibtex_url | null | |
| proceedings | string | lengths 42–42 |
| bibtext | string | lengths 197–792 |
| abstract | string | lengths 303–3.45k |
| title | string | lengths 10–159 |
| authors | sequence | lengths 1–28 (nullable) |
| id | string | 44 classes |
| type | string | 16 classes |
| arxiv_id | string | lengths 0–10 |
| GitHub | sequence | lengths 1–1 |
| paper_page | string | 444 classes |
| n_linked_authors | int64 | -1 to 9 |
| upvotes | int64 | -1 to 42 |
| num_comments | int64 | -1 to 13 |
| n_authors | int64 | -1 to 92 |
| paper_page_exists_pre_conf | int64 | 0 to 1 |
| Models | sequence | lengths 0–100 |
| Datasets | sequence | lengths 0–11 |
| Spaces | sequence | lengths 0–100 |
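A minimal sketch of loading and querying rows with this schema via the Hugging Face `datasets` library. The repository ID and split name below are placeholders, since this excerpt does not name them, and `-1` is assumed to encode a missing paper page, as the column ranges above suggest.

```python
from datasets import load_dataset

# Placeholder repository ID and split -- substitute the actual
# <user>/<dataset> path and split name for this viewer page.
ds = load_dataset("user/neurips-2023-papers", split="train")

# Rows whose Hugging Face paper page existed before the conference,
# ranked by upvotes (-1 appears to encode "no paper page").
with_pages = ds.filter(lambda row: row["paper_page_exists_pre_conf"] == 1)
for row in sorted(with_pages, key=lambda r: r["upvotes"], reverse=True)[:5]:
    print(row["upvotes"], row["title"], row["paper_page"])
```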
null | https://openreview.net/forum?id=gAQCx61chN | @inproceedings{
biktairov2023sol,
title={{SOL}: Sampling-based Optimal Linear bounding of arbitrary scalar functions},
author={Yuriy Biktairov and Jyotirmoy Deshmukh},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gAQCx61chN}
} | Finding tight linear bounds for activation functions in neural networks is an essential part of several state-of-the-art neural network robustness certification tools. An activation function is an arbitrary, nonlinear, scalar function $f: \mathbb{R}^d \rightarrow \mathbb{R}$. In the existing work on robustness certification, such bounds have been computed using human ingenuity for a handful of the most popular activation functions. While a number of heuristics have been proposed for bounding arbitrary functions, no analysis of the optimality of their tightness for general scalar functions has been offered yet, to the best of our knowledge. We fill this gap by formulating a concise optimality criterion for the tightness of the approximation, which allows us to build optimal bounds for any function convex in the region of interest $R$. For a more general class of functions Lipschitz-continuous in $R$, we propose a sampling-based approach (SOL) which, given an instance of the bounding problem, efficiently computes the tightest linear bounds within a given $\varepsilon > 0$ threshold. We leverage an adaptive sampling technique to iteratively build a set of sample points suitable for representing the target activation function. While the theoretical worst-case time complexity of our approach is $O(\varepsilon^{-2d})$, it typically only takes $O(\log^{\beta} \frac{1}{\varepsilon})$ time for some $\beta \ge 1$ and is thus sufficiently fast in practice. We provide empirical evidence of SOL's practicality by incorporating it into a robustness certifier and observing that it produces similar or higher certification rates while taking as little as a quarter of the time compared to other methods. | SOL: Sampling-based Optimal Linear bounding of arbitrary scalar functions | [
"Yuriy Biktairov",
"Jyotirmoy Deshmukh"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=gAP52Z2dar | @inproceedings{
hejna2023inverse,
title={Inverse Preference Learning: Preference-based {RL} without a Reward Function},
author={Joey Hejna and Dorsa Sadigh},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=gAP52Z2dar}
} | Reward functions are difficult to design and often hard to align with human intent. Preference-based Reinforcement Learning (RL) algorithms address these problems by learning reward functions from human feedback. However, the majority of preference-based RL methods naïvely combine supervised reward models with off-the-shelf RL algorithms. Contemporary approaches have sought to improve performance and query complexity by using larger and more complex reward architectures such as transformers. Instead of using highly complex architectures, we develop a new and parameter-efficient algorithm, Inverse Preference Learning (IPL), specifically designed for learning from offline preference data. Our key insight is that for a fixed policy, the $Q$-function encodes all information about the reward function, effectively making them interchangeable. Using this insight, we completely eliminate the need for a learned reward function. Our resulting algorithm is simpler and more parameter-efficient. Across a suite of continuous control and robotics benchmarks, IPL attains competitive performance compared to more complex approaches that leverage transformer-based and non-Markovian reward functions while having fewer algorithmic hyperparameters and learned network parameters. Our code is publicly released. | Inverse Preference Learning: Preference-based RL without a Reward Function | [
"Joey Hejna",
"Dorsa Sadigh"
] | Conference | poster | [
""
] | https://huggingface.co/papers/2305.15363 | 0 | 0 | 0 | 2 | 1 | [] | [] | [] |
|
null | https://openreview.net/forum?id=g9gjpFOiO4 | @inproceedings{
you2023rethinking,
title={Rethinking Semi-Supervised Medical Image Segmentation: A Variance-Reduction Perspective},
author={Chenyu You and Weicheng Dai and Yifei Min and Fenglin Liu and David A. Clifton and S Kevin Zhou and Lawrence Hamilton Staib and James s Duncan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=g9gjpFOiO4}
} | For medical image segmentation, contrastive learning is the dominant practice to improve the quality of visual representations by contrasting semantically similar and dissimilar pairs of samples. This is enabled by the observation that without accessing ground truth labels, negative examples with truly dissimilar anatomical features, if sampled, can significantly improve the performance. In reality, however, these samples may come from similar anatomical features and the models may struggle to distinguish the minority tail-class samples, making the tail classes more prone to misclassification, both of which typically lead to model collapse. In this paper, we propose $\texttt{ARCO}$, a semi-supervised contrastive learning (CL) framework with stratified group theory for medical image segmentation. In particular, we first propose building $\texttt{ARCO}$ through the concept of variance-reduced estimation, and show that certain variance-reduction techniques are particularly beneficial in pixel/voxel-level segmentation tasks with extremely limited labels. Furthermore, we theoretically prove these sampling techniques are universal in variance reduction. Finally, we experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings, and our methods consistently outperform state-of-the-art semi-supervised methods. Additionally, we augment the CL frameworks with these sampling techniques and demonstrate significant gains over previous methods. We believe our work is an important step towards semi-supervised medical image segmentation by quantifying the limitation of current self-supervision objectives for accomplishing such challenging safety-critical tasks. | Rethinking Semi-Supervised Medical Image Segmentation: A Variance-Reduction Perspective | [
"Chenyu You",
"Weicheng Dai",
"Yifei Min",
"Fenglin Liu",
"David A. Clifton",
"S Kevin Zhou",
"Lawrence Hamilton Staib",
"James s Duncan"
] | Conference | poster | 2302.01735 | [
"https://github.com/charlesyou999648/arco"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=g8bjq0qxOl | @inproceedings{
wang2023where,
title={Where Did I Come From? Origin Attribution of {AI}-Generated Images},
author={Zhenting Wang and Chen Chen and Yi Zeng and Lingjuan Lyu and Shiqing Ma},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=g8bjq0qxOl}
} | Image generation techniques have been gaining increasing attention recently, but concerns have been raised about the potential misuse and intellectual property (IP) infringement associated with image generation models. It is, therefore, necessary to analyze the origin of images by inferring if a specific image was generated by a particular model, i.e., origin attribution. Existing methods only focus on specific types of generative models and require additional procedures during the training phase or generation phase. This makes them unsuitable for pre-trained models that lack these specific operations and may impair generation quality. To address this problem, we first develop an alteration-free and model-agnostic origin attribution method via reverse-engineering on image generation models, i.e., inverting the input of a particular model for a specific image. Given a particular model, we first analyze the differences in the hardness of reverse-engineering tasks for generated samples of the given model and other images. Based on our analysis, we then propose a method that utilizes the reconstruction loss of reverse-engineering to infer the origin. Our proposed method effectively distinguishes between generated images of a specific generative model and other images, i.e., images generated by other models and real images. | Where Did I Come From? Origin Attribution of AI-Generated Images | [
"Zhenting Wang",
"Chen Chen",
"Yi Zeng",
"Lingjuan Lyu",
"Shiqing Ma"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=g78QqvhnDU | @inproceedings{
sujit2023prioritizing,
title={Prioritizing Samples in Reinforcement Learning with Reducible Loss},
author={Shiva Kanth Sujit and Somjit Nath and Pedro Braga and Samira Ebrahimi Kahou},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=g78QqvhnDU}
} | Most reinforcement learning algorithms take advantage of an experience replay buffer to repeatedly train on samples the agent has observed in the past. Not all samples carry the same amount of significance and simply assigning equal importance to each of the samples is a naïve strategy. In this paper, we propose a method to prioritize samples based on how much we can learn from a sample. We define the learn-ability of a sample as the steady decrease of the training loss associated with this sample over time. We develop an algorithm to prioritize samples with high learn-ability, while assigning lower priority to those that are hard-to-learn, typically caused by noise or stochasticity. We empirically show that across multiple domains our method is more robust than random sampling and also better than just prioritizing with respect to the training loss, i.e. the temporal difference loss, which is used in prioritized experience replay. | Prioritizing Samples in Reinforcement Learning with Reducible Loss | [
"Shiva Kanth Sujit",
"Somjit Nath",
"Pedro Braga",
"Samira Ebrahimi Kahou"
] | Conference | poster | 2208.10483 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=g6We1SwaY9 | @inproceedings{
li2023blipdiffusion,
title={{BLIP}-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing},
author={Dongxu Li and Junnan Li and Steven Hoi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=g6We1SwaY9}
} | Subject-driven text-to-image generation models create novel renditions of an input subject based on text prompts. Existing models suffer from lengthy fine-tuning and difficulty preserving subject fidelity. To overcome these limitations, we introduce BLIP-Diffusion, a new subject-driven image generation model that supports multimodal control, consuming subject images and text prompts as inputs.
Unlike other subject-driven generation models, BLIP-Diffusion introduces a new multimodal encoder which is pre-trained to provide subject representation. We first pre-train the multimodal encoder following BLIP-2 to produce visual representation aligned with the text.
Then we design a subject representation learning task which enables a diffusion model to leverage such visual representation and generate new subject renditions. Compared with previous methods such as DreamBooth, our model enables zero-shot subject-driven generation, and efficient fine-tuning for customized subjects with up to 20x speedup. We also demonstrate that BLIP-Diffusion can be flexibly combined with existing techniques such as ControlNet and prompt-to-prompt to enable novel subject-driven generation and editing applications. Implementations are available at: https://github.com/salesforce/LAVIS/tree/main/projects/blip-diffusion. | BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing | [
"Dongxu Li",
"Junnan Li",
"Steven Hoi"
] | Conference | poster | 2305.14720 | [
"https://github.com/salesforce/lavis"
] | https://huggingface.co/papers/2305.14720 | 2 | 2 | 0 | 3 | 1 | [
"salesforce/blipdiffusion",
"salesforce/blipdiffusion-controlnet",
"ayushtues/blipdiffusion",
"ayushtues/blipdiffusion-controlnet"
] | [] | [
"BertChristiaens/blip-diffusion"
] |
null | https://openreview.net/forum?id=g49s1N5nmO | @inproceedings{
luo2023transformers,
title={Transformers over Directed Acyclic Graphs},
author={Yuankai Luo and Veronika Thost and Lei Shi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=g49s1N5nmO}
} | Transformer models have recently gained popularity in graph representation learning as they have the potential to learn complex relationships beyond the ones captured by regular graph neural networks.
The main research question is how to inject the structural bias of graphs into the transformer architecture,
and several proposals have been made for undirected molecular graphs and, recently, also for larger network graphs.
In this paper, we study transformers over directed acyclic graphs (DAGs) and propose architecture adaptations tailored to DAGs:
(1) An attention mechanism that is considerably more efficient than the regular quadratic complexity of transformers and at the same time faithfully captures the DAG structure, and (2) a positional encoding of the DAG's partial order, complementing the former.
We rigorously evaluate our approach over various types of tasks, ranging from classifying source code graphs to nodes in citation networks, and show that it is effective in two important aspects: in making graph transformers generally outperform graph neural networks tailored to DAGs and in improving SOTA graph transformer performance in terms of both quality and efficiency. | Transformers over Directed Acyclic Graphs | [
"Yuankai Luo",
"Veronika Thost",
"Lei Shi"
] | Conference | poster | 2210.13148 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=g2ROKOASiv | @inproceedings{
dorner2023incentivizing,
title={Incentivizing Honesty among Competitors in Collaborative Learning and Optimization},
author={Florian E. Dorner and Nikola Konstantinov and Georgi Stoyanov Pashaliev and Martin Vechev},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=g2ROKOASiv}
} | Collaborative learning techniques have the potential to enable training machine learning models that are superior to models trained on a single entity’s data. However, in many cases, potential participants in such collaborative schemes are competitors on a downstream task, such as firms that each aim to attract customers by providing the best recommendations. This can incentivize dishonest updates that damage other participants' models, potentially undermining the benefits of collaboration. In this work, we formulate a game that models such interactions and study two learning tasks within this framework: single-round mean estimation and multi-round SGD on strongly-convex objectives. For a natural class of player actions, we show that rational clients are incentivized to strongly manipulate their updates, preventing learning. We then propose mechanisms that incentivize honest communication and ensure learning quality comparable to full cooperation. Lastly, we empirically demonstrate the effectiveness of our incentive scheme on a standard non-convex federated learning benchmark. Our work shows that explicitly modeling the incentives and actions of dishonest clients, rather than assuming them malicious, can enable strong robustness guarantees for collaborative learning. | Incentivizing Honesty among Competitors in Collaborative Learning and Optimization | [
"Florian E. Dorner",
"Nikola Konstantinov",
"Georgi Stoyanov Pashaliev",
"Martin Vechev"
] | Conference | poster | 2305.16272 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=g27BggUT3L | @inproceedings{
chen2023lart,
title={{LART}: Neural Correspondence Learning with Latent Regularization Transformer for 3D Motion Transfer},
author={Haoyu Chen and Hao Tang and Radu Timofte and Luc Van Gool and Guoying Zhao},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=g27BggUT3L}
} | 3D motion transfer aims at transferring the motion from a dynamic input sequence to a static 3D object and outputs an identical motion of the target with high-fidelity and realistic visual effects. In this work, we propose a novel 3D Transformer framework called LART for 3D motion transfer. With carefully-designed architectures, LART is able to implicitly learn the correspondence via a flexible geometry perception. Thus, unlike other existing methods, LART does not require any key point annotations or pre-defined correspondence between the motion source and target meshes and can also handle large-size full-detailed unseen 3D targets. Besides, we introduce a novel latent metric regularization on the Transformer for better motion generation. Our rationale lies in the observation that the decoded motions can be approximately expressed as linearly geometric distortion at the frame level. The metric preservation of motions could be translated to the formation of linear paths in the underlying latent space as a rigorous constraint to control the synthetic motions occurring in the construction of the latent space. The proposed LART shows a high learning efficiency with the need for a few samples from the AMASS dataset to generate motions with plausible visual effects. The experimental results verify the potential of our generative model in applications of motion transfer, content generation, temporal interpolation, and motion denoising. The code is made available: https://github.com/mikecheninoulu/LART. | LART: Neural Correspondence Learning with Latent Regularization Transformer for 3D Motion Transfer | [
"Haoyu Chen",
"Hao Tang",
"Radu Timofte",
"Luc Van Gool",
"Guoying Zhao"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=g1dMYenhe4 | @inproceedings{
lin2023mimex,
title={{MIME}x: Intrinsic Rewards from Masked Input Modeling},
author={Toru Lin and Allan Jabri},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=g1dMYenhe4}
} | Exploring in environments with high-dimensional observations is hard. One promising approach for exploration is to use intrinsic rewards, which often boils down to estimating "novelty" of states, transitions, or trajectories with deep networks. Prior works have shown that conditional prediction objectives such as masked autoencoding can be seen as stochastic estimation of pseudo-likelihood. We show how this perspective naturally leads to a unified view on existing intrinsic reward approaches: they are special cases of conditional prediction, where the estimation of novelty can be seen as pseudo-likelihood estimation with different mask distributions. From this view, we propose a general framework for deriving intrinsic rewards -- Masked Input Modeling for Exploration (MIMEx) -- where the mask distribution can be flexibly tuned to control the difficulty of the underlying conditional prediction task. We demonstrate that MIMEx can achieve superior results when compared against competitive baselines on a suite of challenging sparse-reward visuomotor tasks. | MIMEx: Intrinsic Rewards from Masked Input Modeling | [
"Toru Lin",
"Allan Jabri"
] | Conference | poster | 2305.08932 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=fze7P9oy6l | @inproceedings{
mao2023supported,
title={Supported Value Regularization for Offline Reinforcement Learning},
author={Yixiu Mao and Hongchang Zhang and Chen Chen and Yi Xu and Xiangyang Ji},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=fze7P9oy6l}
} | Offline reinforcement learning suffers from the extrapolation error and value overestimation caused by out-of-distribution (OOD) actions. To mitigate this issue, value regularization approaches aim to penalize the learned value functions to assign lower values to OOD actions. However, existing value regularization methods lack a proper distinction between the regularization effects on in-distribution (ID) and OOD actions, and fail to guarantee optimal convergence results of the policy. To this end, we propose Supported Value Regularization (SVR), which penalizes the Q-values for all OOD actions while maintaining standard Bellman updates for ID ones. Specifically, we utilize the bias of importance sampling to compute the summation of Q-values over the entire OOD region, which serves as the penalty for policy evaluation. This design automatically separates the regularization for ID and OOD actions without manually distinguishing between them. In tabular MDP, we show that the policy evaluation operator of SVR is a contraction, whose fixed point outputs unbiased Q-values for ID actions and underestimated Q-values for OOD actions. Furthermore, the policy iteration with SVR guarantees strict policy improvement until convergence to the optimal support-constrained policy in the dataset. Empirically, we validate the theoretical properties of SVR in a tabular maze environment and demonstrate its state-of-the-art performance on a range of continuous control tasks in the D4RL benchmark. | Supported Value Regularization for Offline Reinforcement Learning | [
"Yixiu Mao",
"Hongchang Zhang",
"Chen Chen",
"Yi Xu",
"Xiangyang Ji"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=fyfmHi8ay3 | @inproceedings{
uzolas2023templatefree,
title={Template-free Articulated Neural Point Clouds for Reposable View Synthesis},
author={Lukas Uzolas and Elmar Eisemann and Petr Kellnhofer},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=fyfmHi8ay3}
} | Dynamic Neural Radiance Fields (NeRFs) achieve remarkable visual quality when synthesizing novel views of time-evolving 3D scenes. However, the common reliance on backward deformation fields makes reanimation of the captured object poses challenging. Moreover, state-of-the-art dynamic models are often limited by low visual fidelity, long reconstruction times, or specificity to narrow application domains. In this paper, we present a novel method utilizing a point-based representation and Linear Blend Skinning (LBS) to jointly learn a Dynamic NeRF and an associated skeletal model from even sparse multi-view video. Our forward-warping approach achieves state-of-the-art visual fidelity when synthesizing novel views and poses while significantly reducing the necessary learning time when compared to existing work. We demonstrate the versatility of our representation on a variety of articulated objects from common datasets and obtain reposable 3D reconstructions without the need for object-specific skeletal templates. | Template-free Articulated Neural Point Clouds for Reposable View Synthesis | [
"Lukas Uzolas",
"Elmar Eisemann",
"Petr Kellnhofer"
] | Conference | poster | 2305.19065 | [
"https://github.com/lukasuz/articulated-point-nerf"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=fyLvHzEssH | @inproceedings{
seleznova2023neural,
title={Neural (Tangent Kernel) Collapse},
author={Mariia Seleznova and Dana Weitzner and Raja Giryes and Gitta Kutyniok and Hung-Hsu Chou},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=fyLvHzEssH}
} | This work bridges two important concepts: the Neural Tangent Kernel (NTK), which captures the evolution of deep neural networks (DNNs) during training, and the Neural Collapse (NC) phenomenon, which refers to the emergence of symmetry and structure in the last-layer features of well-trained classification DNNs. We adopt the natural assumption that the empirical NTK develops a block structure aligned with the class labels, i.e., samples within the same class have stronger correlations than samples from different classes. Under this assumption, we derive the dynamics of DNNs trained with mean squared error (MSE) loss and break them into interpretable phases. Moreover, we identify an invariant that captures the essence of the dynamics, and use it to prove the emergence of NC in DNNs with block-structured NTK. We provide large-scale numerical experiments on three common DNN architectures and three benchmark datasets to support our theory. | Neural (Tangent Kernel) Collapse | [
"Mariia Seleznova",
"Dana Weitzner",
"Raja Giryes",
"Gitta Kutyniok",
"Hung-Hsu Chou"
] | Conference | poster | 2305.16427 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=fxNQJVMwK2 | @inproceedings{
clark2023texttoimage,
title={Text-to-Image Diffusion Models are Zero Shot Classifiers},
author={Kevin Clark and Priyank Jaini},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=fxNQJVMwK2}
} | The excellent generative capabilities of text-to-image diffusion models suggest they learn informative representations of image-text data.
However, what knowledge their representations capture is not fully understood, and they have not been thoroughly explored on downstream tasks.
We investigate diffusion models by proposing a method for evaluating them as zero-shot classifiers.
The key idea is using a diffusion model's ability to denoise a noised image given a text description of a label as a proxy for that label's likelihood.
We apply our method to Stable Diffusion and Imagen, using it to probe fine-grained aspects of the models' knowledge and comparing them with CLIP's zero-shot abilities.
They perform competitively with CLIP on a wide range of zero-shot image classification datasets.
Additionally, they achieve state-of-the-art results on shape/texture bias tests and can successfully perform attribute binding while CLIP cannot.
Although generative pre-training is prevalent in NLP, visual foundation models often use other methods such as contrastive learning.
Based on our findings, we argue that generative pre-training should be explored as a compelling alternative for vision and vision-language problems. | Text-to-Image Diffusion Models are Zero Shot Classifiers | [
"Kevin Clark",
"Priyank Jaini"
] | Conference | spotlight | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=fwvfxDbUFw | @inproceedings{
wu2023learning,
title={Learning Score-based Grasping Primitive for Human-assisting Dexterous Grasping},
author={Tianhao Wu and Mingdong Wu and Jiyao Zhang and Yunchong Gan and Hao Dong},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=fwvfxDbUFw}
} | The use of anthropomorphic robotic hands for assisting individuals in situations where human hands may be unavailable or unsuitable has gained significant importance. In this paper, we propose a novel task called human-assisting dexterous grasping that aims to train a policy for controlling a robotic hand's fingers to assist users in grasping objects. Unlike conventional dexterous grasping, this task presents a more complex challenge as the policy needs to adapt to diverse user intentions, in addition to the object's geometry. We address this challenge by proposing an approach consisting of two sub-modules: a hand-object-conditional grasping primitive called Grasping Gradient Field (GraspGF), and a history-conditional residual policy. GraspGF learns 'how' to grasp by estimating the gradient of a synthesised success grasping example set, while the residual policy determines 'when' and at what speed the grasping action should be executed based on the trajectory history. Experimental results demonstrate the superiority of our proposed method compared to baselines, highlighting the user-awareness and practicality in real-world applications. The codes and demonstrations can be viewed at https://sites.google.com/view/graspgf. | Learning Score-based Grasping Primitive for Human-assisting Dexterous Grasping | [
"Tianhao Wu",
"Mingdong Wu",
"Jiyao Zhang",
"Yunchong Gan",
"Hao Dong"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=fvm9jVcpBn | @inproceedings{
manam2023sensitivity,
title={Sensitivity in Translation Averaging},
author={Lalit Manam and Venu Madhav Govindu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=fvm9jVcpBn}
} | In 3D computer vision, translation averaging solves for absolute translations given a set of pairwise relative translation directions. While there has been much work on robustness to outliers and studies on the uniqueness of the solution, this paper deals with a distinctly different problem of sensitivity in translation averaging under uncertainty. We first analyze sensitivity in estimating scales corresponding to relative directions under small perturbations of the relative directions. Then, we formally define the conditioning of the translation averaging problem, which assesses the reliability of estimated translations based solely on the input directions. We give a sufficient criterion to ensure that the problem is well-conditioned. Subsequently, we provide an efficient algorithm to identify and remove combinations of directions which make the problem ill-conditioned while ensuring uniqueness of the solution. We demonstrate the utility of such analysis in global structure-from-motion pipelines for obtaining 3D reconstructions, which reveals the benefits of filtering the ill-conditioned set of directions in translation averaging in terms of reduced translation errors, a higher number of 3D points triangulated and faster convergence of bundle adjustment. | Sensitivity in Translation Averaging | [
"Lalit Manam",
"Venu Madhav Govindu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=ftPoVcm821 | @inproceedings{
wang2023democode,
title={Demo2Code: From Summarizing Demonstrations to Synthesizing Code via Extended Chain-of-Thought},
author={Huaxiaoyue Wang and Gonzalo Gonzalez-Pumariega and Yash Sharma and Sanjiban Choudhury},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ftPoVcm821}
} | Language instructions and demonstrations are two natural ways for users to teach robots personalized tasks. Recent progress in Large Language Models (LLMs) has shown impressive performance in translating language instructions into code for robotic tasks. However, translating demonstrations into task code continues to be a challenge due to the length and complexity of both demonstrations and code, making learning a direct mapping intractable. This paper presents Demo2Code, a novel framework that generates robot task code from demonstrations via an extended chain-of-thought and defines a common latent specification to connect the two. Our framework employs a robust two-stage process: (1) a recursive summarization technique that condenses demonstrations into concise specifications, and (2) a code synthesis approach that expands each function recursively from the generated specifications. We conduct extensive evaluation on various robot task benchmarks, including a novel game benchmark Robotouille, designed to simulate diverse cooking tasks in a kitchen environment. | Demo2Code: From Summarizing Demonstrations to Synthesizing Code via Extended Chain-of-Thought | [
"Huaxiaoyue Wang",
"Gonzalo Gonzalez-Pumariega",
"Yash Sharma",
"Sanjiban Choudhury"
] | Conference | poster | [
""
] | https://huggingface.co/papers/2305.16744 | 2 | 1 | 0 | 4 | 1 | [] | [] | [] |
|
null | https://openreview.net/forum?id=fsCcGr8YFR | @inproceedings{
gu2023uenerfneural,
title={{UE}4-Ne{RF}:Neural Radiance Field for Real-Time Rendering of Large-Scale Scene},
author={Jiaming Gu and Minchao Jiang and Hongsheng Li and Xiaoyuan Lu and Guangming Zhu and Syed Afaq Ali Shah and Liang Zhang and Mohammed Bennamoun},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=fsCcGr8YFR}
} | Neural Radiance Fields (NeRF) is a novel implicit 3D reconstruction method that shows immense potential and has been gaining increasing attention. It enables the reconstruction of 3D scenes solely from a set of photographs. However, its real-time rendering capability, especially for interactive real-time rendering of large-scale scenes, still has significant limitations. To address these challenges, in this paper, we propose a novel neural rendering system called UE4-NeRF, specifically designed for real-time rendering of large-scale scenes. We partitioned each large scene into different sub-NeRFs. In order to represent the partitioned independent scene, we initialize polygonal meshes by constructing multiple regular octahedra within the scene and the vertices of the polygonal faces are continuously optimized during the training process. Drawing inspiration from Level of Detail (LOD) techniques, we trained meshes of varying levels of detail for different observation levels. Our approach combines with the rasterization pipeline in Unreal Engine 4 (UE4), achieving real-time rendering of large-scale scenes at 4K resolution with a frame rate of up to 43 FPS. Rendering within UE4 also facilitates scene editing in subsequent stages. Furthermore, through experiments, we have demonstrated that our method achieves rendering quality comparable to state-of-the-art approaches. Project page: https://jamchaos.github.io/UE4-NeRF/. | UE4-NeRF:Neural Radiance Field for Real-Time Rendering of Large-Scale Scene | [
"Jiaming Gu",
"Minchao Jiang",
"Hongsheng Li",
"Xiaoyuan Lu",
"Guangming Zhu",
"Syed Afaq Ali Shah",
"Liang Zhang",
"Mohammed Bennamoun"
] | Conference | poster | 2310.13263 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=frVo9MzRuU | @inproceedings{
okawa2023compositional,
title={Compositional Abilities Emerge Multiplicatively: Exploring Diffusion Models on a Synthetic Task},
author={Maya Okawa and Ekdeep Singh Lubana and Robert P. Dick and Hidenori Tanaka},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=frVo9MzRuU}
} | Modern generative models exhibit unprecedented capabilities to generate extremely realistic data. However, given the inherent compositionality of the real world, reliable use of these models in practical applications requires that they exhibit the capability to compose a novel set of concepts to generate outputs not seen in the training data set. Prior work demonstrates that recent diffusion models do exhibit intriguing compositional generalization abilities, but also fail unpredictably. Motivated by this, we perform a controlled study for understanding compositional generalization in conditional diffusion models in a synthetic setting, varying different attributes of the training data and measuring the model's ability to generate samples out-of-distribution. Our results show: (i) the order in which the ability to generate samples from a concept and compose them emerges is governed by the structure of the underlying data-generating process; (ii) performance on compositional tasks exhibits a sudden "emergence" due to multiplicative reliance on the performance of constituent tasks, partially explaining emergent phenomena seen in generative models; and (iii) composing concepts with lower frequency in the training data to generate out-of-distribution samples requires considerably more optimization steps compared to generating in-distribution samples. Overall, our study lays a foundation for understanding emergent capabilities and compositionality in generative models from a data-centric perspective. | Compositional Abilities Emerge Multiplicatively: Exploring Diffusion Models on a Synthetic Task | [
"Maya Okawa",
"Ekdeep Singh Lubana",
"Robert P. Dick",
"Hidenori Tanaka"
] | Conference | poster | 2310.09336 | [
"https://github.com/phys-ai/concept_graphs"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=frSfSaRGXY | @inproceedings{
shiragur2023meek,
title={Meek Separators and Their Applications in Targeted Causal Discovery},
author={Kirankumar Shiragur and Jiaqi Zhang and Caroline Uhler},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=frSfSaRGXY}
} | Learning causal structures from interventional data is a fundamental problem with broad applications across various fields. While many previous works have focused on recovering the entire causal graph, in practice, there are scenarios where learning only part of the causal graph suffices. This is called \emph{targeted} causal discovery. In our work, we focus on two such well-motivated problems: subset search and causal matching. We aim to minimize the number of interventions in both cases.
Towards this, we introduce the \emph{Meek separator}, which is a subset of vertices that, when intervened, decomposes the remaining unoriented edges into smaller connected components. We then present an efficient algorithm to find Meek separators that are of small sizes. Such a procedure is helpful in designing various divide-and-conquer-based approaches. In particular, we propose two randomized algorithms that achieve logarithmic approximation for subset search and causal matching, respectively. Our results provide the first known average-case provable guarantees for both problems. We believe that this opens up possibilities to design near-optimal methods for many other targeted causal structure learning problems arising from various applications. | Meek Separators and Their Applications in Targeted Causal Discovery | [
"Kirankumar Shiragur",
"Jiaqi Zhang",
"Caroline Uhler"
] | Conference | poster | 2310.20075 | [
"https://github.com/uhlerlab/meek_sep"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=frHPeRedHo | @inproceedings{
gollakota2023agnostically,
title={Agnostically Learning Single-Index Models using Omnipredictors},
author={Aravind Gollakota and Parikshit Gopalan and Adam Klivans and Konstantinos Stavropoulos},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=frHPeRedHo}
} | We give the first result for agnostically learning Single-Index Models (SIMs) with arbitrary monotone and Lipschitz activations. All prior work either held only in the realizable setting or required the activation to be known. Moreover, we only require the marginal to have bounded second moments, whereas all prior work required stronger distributional assumptions (such as anticoncentration or boundedness). Our algorithm is based on recent work by Gopalan et al. [2023] on Omniprediction using predictors satisfying calibrated multiaccuracy. Our analysis is simple and relies on the relationship between Bregman divergences (or matching losses) and $\ell_p$ distances. We also provide new guarantees for standard algorithms like GLMtron and logistic regression in the agnostic setting. | Agnostically Learning Single-Index Models using Omnipredictors | [
"Aravind Gollakota",
"Parikshit Gopalan",
"Adam Klivans",
"Konstantinos Stavropoulos"
] | Conference | poster | 2306.10615 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=fpzA8uRA95 | @inproceedings{
xu2023efficient,
title={Efficient Adversarial Contrastive Learning via Robustness-Aware Coreset Selection},
author={Xilie Xu and Jingfeng Zhang and Feng Liu and Masashi Sugiyama and Mohan Kankanhalli},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=fpzA8uRA95}
} | Adversarial contrastive learning (ACL) does not require expensive data annotations but outputs a robust representation that withstands adversarial attacks and also generalizes to a wide range of downstream tasks. However, ACL needs tremendous running time to generate the adversarial variants of all training data, which limits its scalability to large datasets. To speed up ACL, this paper proposes a robustness-aware coreset selection (RCS) method. RCS does not require label information and searches for an informative subset that minimizes a representational divergence, which is the distance of the representation between natural data and their virtual adversarial variants. The vanilla solution of RCS via traversing all possible subsets is computationally prohibitive. Therefore, we theoretically transform RCS into a surrogate problem of submodular maximization, of which the greedy search is an efficient solution with an optimality guarantee for the original problem. Empirically, our comprehensive results corroborate that RCS can speed up ACL by a large margin without significantly hurting the robustness transferability. Notably, to the best of our knowledge, we are the first to conduct ACL efficiently on the large-scale ImageNet-1K dataset to obtain an effective robust representation via RCS. Our source code is at https://github.com/GodXuxilie/Efficient_ACL_via_RCS. | Efficient Adversarial Contrastive Learning via Robustness-Aware Coreset Selection | [
"Xilie Xu",
"Jingfeng Zhang",
"Feng Liu",
"Masashi Sugiyama",
"Mohan Kankanhalli"
] | Conference | spotlight | 2302.03857 | [
"https://github.com/godxuxilie/efficient_acl_via_rcs"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=fpHfRD3f4N | @inproceedings{
huang2023tackling,
title={Tackling Heavy-Tailed Rewards in Reinforcement Learning with Function Approximation: Minimax Optimal and Instance-Dependent Regret Bounds},
author={Jiayi Huang and Han Zhong and Liwei Wang and Lin Yang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=fpHfRD3f4N}
} | While numerous works have focused on devising efficient algorithms for reinforcement learning (RL) with uniformly bounded rewards, it remains an open question whether sample or time-efficient algorithms for RL with large state-action spaces exist when the rewards are \emph{heavy-tailed}, i.e., with only finite $(1+\epsilon)$-th moments for some $\epsilon\in(0,1]$. In this work, we address the challenge of such rewards in RL with linear function approximation. We first design an algorithm, \textsc{Heavy-OFUL}, for heavy-tailed linear bandits, achieving an \emph{instance-dependent} $T$-round regret of $\tilde{O}\big(d T^{\frac{1-\epsilon}{2(1+\epsilon)}} \sqrt{\sum_{t=1}^T \nu_t^2} + d T^{\frac{1-\epsilon}{2(1+\epsilon)}}\big)$, the \emph{first} of this kind. Here, $d$ is the feature dimension, and $\nu_t^{1+\epsilon}$ is the $(1+\epsilon)$-th central moment of the reward at the $t$-th round. We further show the above bound is minimax optimal when applied to the worst-case instances in stochastic and deterministic linear bandits. We then extend this algorithm to the RL settings with linear function approximation. Our algorithm, termed \textsc{Heavy-LSVI-UCB}, achieves the \emph{first} computationally efficient \emph{instance-dependent} $K$-episode regret of $\tilde{O}(d \sqrt{H \mathcal{U}^*} K^\frac{1}{1+\epsilon} + d \sqrt{H \mathcal{V}^* K})$. Here, $H$ is the length of the episode, and $\mathcal{U}^*, \mathcal{V}^*$ are instance-dependent quantities scaling with the central moment of reward and value functions, respectively. We also provide a matching minimax lower bound $\Omega(d H K^{\frac{1}{1+\epsilon}} + d \sqrt{H^3 K})$ to demonstrate the optimality of our algorithm in the worst case. Our result is achieved via a novel robust self-normalized concentration inequality that may be of independent interest in handling heavy-tailed noise in general online regression problems. | Tackling Heavy-Tailed Rewards in Reinforcement Learning with Function Approximation: Minimax Optimal and Instance-Dependent Regret Bounds | [
"Jiayi Huang",
"Han Zhong",
"Liwei Wang",
"Lin Yang"
] | Conference | poster | 2306.06836 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=fpElyckKkd | @inproceedings{
zhang2023datainformed,
title={Data-Informed Geometric Space Selection},
author={Shuai Zhang and Wenqi Jiang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=fpElyckKkd}
} | Geometric representation learning (e.g., hyperbolic and spherical geometry) has proven to be efficacious in solving many intricate machine learning tasks. The fundamental challenge of geometric representation learning lies in aligning the inherent geometric bias with the underlying structure of the data, which is a rarely explored topic in the literature. Existing methods heavily rely on heuristic assumptions on the data structure to decide the type of geometry to be adopted, which often leads to suboptimal performance. This work aims to automate the alignment process via a data-informed strategy such that we optimize model performance with minimal overhead. Specifically, a sparse gating mechanism is employed to enable each input data point $\mathit{p}$ to select $K$ geometric spaces from a given candidate geometric space pool with $N$ ($K<N$) spaces of different geometry. The selected $K$ spaces are then tightly integrated to formulate a Cartesian product space, which is leveraged to process this input data $\mathit{p}$. In doing so, each input data is processed by the spaces it selected with maximum specialization. We empirically show that this method can effectively align data and spaces without human interventions and further boost performance on real-world tasks, demonstrating its potential in eliciting the expressive power of geometric representations and practical usability. | Data-Informed Geometric Space Selection | [
"Shuai Zhang",
"Wenqi Jiang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=fmYmXNPmhv | @inproceedings{
zhou2023permutation,
title={Permutation Equivariant Neural Functionals},
author={Allan Zhou and Kaien Yang and Kaylee Burns and Adriano Cardace and Yiding Jiang and Samuel Sokota and J Zico Kolter and Chelsea Finn},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=fmYmXNPmhv}
} | This work studies the design of neural networks that can process the weights or gradients of other neural networks, which we refer to as *neural functional networks* (NFNs). Despite a wide range of potential applications, including learned optimization, processing implicit neural representations, network editing, and policy evaluation, there are few unifying principles for designing effective architectures that process the weights of other networks. We approach the design of neural functionals through the lens of symmetry, in particular by focusing on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order. We introduce a framework for building *permutation equivariant* neural functionals, whose architectures encode these symmetries as an inductive bias. The key building blocks of this framework are *NF-Layers* (neural functional layers) that we constrain to be permutation equivariant through an appropriate parameter sharing scheme. In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks that require processing the weights of MLPs and CNNs, such as predicting classifier generalization, producing "winning ticket" sparsity masks for initializations, and classifying or editing implicit neural representations (INRs). In addition, we provide code for our models and experiments at https://github.com/AllanYangZhou/nfn. | Permutation Equivariant Neural Functionals | [
"Allan Zhou",
"Kaien Yang",
"Kaylee Burns",
"Adriano Cardace",
"Yiding Jiang",
"Samuel Sokota",
"J Zico Kolter",
"Chelsea Finn"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=fmJv8Hj0yo | @inproceedings{
krojer2023are,
title={Are Diffusion Models Vision-And-Language Reasoners?},
author={Benno Krojer and Elinor Poole-Dayan and Vikram Voleti and Christopher Pal and Siva Reddy},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=fmJv8Hj0yo}
} | Text-conditioned image generation models have recently shown immense qualitative success using denoising diffusion processes. However, unlike discriminative vision-and-language models, it is a non-trivial task to subject these diffusion-based generative models to automatic fine-grained quantitative evaluation of high-level phenomena such as compositionality.
Towards this goal, we perform two innovations. First, we transform diffusion-based models (in our case, Stable Diffusion) for any image-text matching (ITM) task using a novel method called DiffusionITM.
Second, we introduce the Generative-Discriminative Evaluation Benchmark (GDBench), with 7 complex vision-and-language tasks, bias evaluation, and detailed analysis.
We find that Stable Diffusion + DiffusionITM is competitive on many tasks and outperforms CLIP on compositional tasks like CLEVR and Winoground.
We further boost its compositional performance with a transfer setup by fine-tuning on MS-COCO while retaining generative capabilities.
We also measure the stereotypical bias in diffusion models, and find that Stable Diffusion 2.1 is, for the most part, less biased than Stable Diffusion 1.5.
Overall, our results point in an exciting direction bringing discriminative and generative model evaluation closer. We will release code and benchmark setup soon. | Are Diffusion Models Vision-And-Language Reasoners? | [
"Benno Krojer",
"Elinor Poole-Dayan",
"Vikram Voleti",
"Christopher Pal",
"Siva Reddy"
] | Conference | poster | 2305.16397 | [
"https://github.com/mcgill-nlp/diffusion-itm"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=fljrZsJ2I8 | @inproceedings{
tang2023prototypical,
title={Prototypical Variational Autoencoder for 3D Few-shot Object Detection},
author={Weiliang Tang and Biqi YANG and Xianzhi Li and Yun-Hui Liu and Pheng-Ann Heng and Chi-Wing Fu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=fljrZsJ2I8}
} | Few-Shot 3D Point Cloud Object Detection (FS3D) is a challenging task, aiming to detect 3D objects of novel classes using only limited annotated samples for training. Considering that the detection performance highly relies on the quality of the latent features, we design a VAE-based prototype learning scheme, named prototypical VAE (P-VAE), to learn a probabilistic latent space for enhancing the diversity and distinctiveness of the sampled features. The network encodes a multi-center GMM-like posterior, in which each distribution centers at a prototype. For regularization, P-VAE incorporates a reconstruction task to preserve geometric information. To adopt P-VAE for the detection framework, we formulate Geometric-informative Prototypical VAE (GP-VAE) to handle varying geometric components and Class-specific Prototypical VAE (CP-VAE) to handle varying object categories. In the first stage, we harness GP-VAE to aid feature extraction from the input scene. In the second stage, we cluster the geometric-informative features into per-instance features and use CP-VAE to refine each instance feature with category-level guidance. Experimental results show the top performance of our approach over the state of the arts on two FS3D benchmarks. Quantitative ablations and qualitative prototype analysis further demonstrate that our probabilistic modeling can significantly boost prototype learning for FS3D. | Prototypical Variational Autoencoder for 3D Few-shot Object Detection | [
"Weiliang Tang",
"Biqi YANG",
"Xianzhi Li",
"Yun-Hui Liu",
"Pheng-Ann Heng",
"Chi-Wing Fu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=fjXTcUUgaC | @inproceedings{
zhang2023policy,
title={Policy Finetuning in Reinforcement Learning via Design of Experiments using Offline Data},
author={Ruiqi Zhang and Andrea Zanette},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=fjXTcUUgaC}
} | In some applications of reinforcement learning, a dataset of pre-collected experience is already available, but it is also possible to acquire some additional online data to help improve the quality of the policy. However, it may be preferable to gather additional data with a single, non-reactive exploration policy and avoid the engineering costs associated with switching policies. In this paper, we propose an algorithm with provable guarantees that can leverage an offline dataset to design a single non-reactive policy for exploration. We theoretically analyze the algorithm and measure the quality of the final policy as a function of the local coverage of the original dataset and the amount of additional data collected. | Policy Finetuning in Reinforcement Learning via Design of Experiments using Offline Data | [
"Ruiqi Zhang",
"Andrea Zanette"
] | Conference | poster | 2307.04354 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=fj0ZeRtUTU | @inproceedings{
kim2023bootstrapped,
title={Bootstrapped Training of Score-Conditioned Generator for Offline Design of Biological Sequences},
author={Minsu Kim and Federico Berto and Sungsoo Ahn and Jinkyoo Park},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=fj0ZeRtUTU}
} | We study the problem of optimizing biological sequences, e.g., proteins, DNA, and RNA, to maximize a black-box score function that is only evaluated in an offline dataset. We propose a novel solution, bootstrapped training of score-conditioned generator (BootGen) algorithm. Our algorithm repeats a two-stage process. In the first stage, our algorithm trains the biological sequence generator with rank-based weights to enhance the accuracy of sequence generation based on high scores. The subsequent stage involves bootstrapping, which augments the training dataset with self-generated data labeled by a proxy score function. Our key idea is to align the score-based generation with a proxy score function, which distills the knowledge of the proxy score function to the generator. After training, we aggregate samples from multiple bootstrapped generators and proxies to produce a diverse design. Extensive experiments show that our method outperforms competitive baselines on biological sequential design tasks. We provide reproducible source code: https://github.com/kaist-silab/bootgen. | Bootstrapped Training of Score-Conditioned Generator for Offline Design of Biological Sequences | [
"Minsu Kim",
"Federico Berto",
"Sungsoo Ahn",
"Jinkyoo Park"
] | Conference | poster | 2306.03111 | [
"https://github.com/kaist-silab/bootgen"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
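The two-stage BootGen loop described above can be sketched on a toy bit-string task. This is a hedged, minimal illustration rather than the paper's implementation: the "generator" is an independent-Bernoulli model, the oracle and linear proxy are stand-ins, and the rank-based weighting is one plausible instantiation.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 10                                   # toy sequence length

def oracle(x):                           # hidden black-box score (toy: count of 1s)
    return x.sum(axis=1).astype(float)

# Offline dataset and a crude proxy fit to it.
X = rng.integers(0, 2, size=(200, L))
y = oracle(X)
w_proxy = np.linalg.lstsq(X, y, rcond=None)[0]      # linear proxy score
proxy = lambda x: x @ w_proxy

# "Generator": independent Bernoulli logits, trained by weighted MLE.
theta = np.zeros(L)
for round_ in range(5):
    # Stage 1: rank-based weights emphasize high-scoring sequences.
    ranks = y.argsort().argsort()                    # 0 = worst
    w = (ranks + 1) / ranks.max()
    p = 1 / (1 + np.exp(-theta))
    grad = (w[:, None] * (X - p)).mean(axis=0)       # weighted Bernoulli MLE gradient
    theta += 2.0 * grad
    # Stage 2: bootstrap with self-generated, proxy-labeled data.
    Xg = (rng.random((50, L)) < p).astype(int)
    X, y = np.vstack([X, Xg]), np.concatenate([y, proxy(Xg)])

samples = (rng.random((100, L)) < 1 / (1 + np.exp(-theta))).astype(int)
print("mean oracle score of final samples:", oracle(samples).mean())
```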
null | https://openreview.net/forum?id=fg7iyNK81W | @inproceedings{
l{\"o}we2023rotating,
title={Rotating Features for Object Discovery},
author={Sindy L{\"o}we and Phillip Lippe and Francesco Locatello and Max Welling},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=fg7iyNK81W}
} | The binding problem in human cognition, concerning how the brain represents and connects objects within a fixed network of neural connections, remains a subject of intense debate. Most machine learning efforts addressing this issue in an unsupervised setting have focused on slot-based methods, which may be limiting due to their discrete nature and difficulty in expressing uncertainty. Recently, the Complex AutoEncoder was proposed as an alternative that learns continuous and distributed object-centric representations. However, it is only applicable to simple toy data. In this paper, we present Rotating Features, a generalization of complex-valued features to higher dimensions, and a new evaluation procedure for extracting objects from distributed representations. Additionally, we show the applicability of our approach to pre-trained features. Together, these advancements enable us to scale distributed object-centric representations from simple toy to real-world data. We believe this work advances a new paradigm for addressing the binding problem in machine learning and has the potential to inspire further innovation in the field. | Rotating Features for Object Discovery | [
"Sindy Löwe",
"Phillip Lippe",
"Francesco Locatello",
"Max Welling"
] | Conference | oral | 2306.00600 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=ffOhY40Nrh | @inproceedings{
nayebi2023neural,
title={Neural Foundations of Mental Simulation: Future Prediction of Latent Representations on Dynamic Scenes},
author={Aran Nayebi and Rishi Rajalingham and Mehrdad Jazayeri and Guangyu Robert Yang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ffOhY40Nrh}
} | Humans and animals have a rich and flexible understanding of the physical world, which enables them to infer the underlying dynamical trajectories of objects and events, plausible future states, and use that to plan and anticipate the consequences of actions.
However, the neural mechanisms underlying these computations are unclear.
We combine a goal-driven modeling approach with dense neurophysiological data and high-throughput human behavioral readouts that contain thousands of comparisons to directly impinge on this question.
Specifically, we construct and evaluate several classes of sensory-cognitive networks to predict the future state of rich, ethologically-relevant environments, ranging from self-supervised end-to-end models with pixel-wise or object-slot objectives, to models that future predict in the latent space of purely static image-pretrained or dynamic video-pretrained foundation models.
We find that ``scale is \emph{not} all you need'', and that many state-of-the-art machine learning models fail to perform well on our neural and behavioral benchmarks for future prediction.
In fact, only one class of models matches these data well overall.
We find that neural responses are currently best predicted by models trained to predict the future state of their environment in the \emph{latent} space of pretrained foundation models optimized for \emph{dynamic} scenes in a self-supervised manner.
These models also approach the neurons' ability to predict the environmental state variables that are visually hidden from view, despite not being explicitly trained to do so.
Finally, we find that not all foundation model latents are equal.
Notably, models that future predict in the latent space of video foundation models that are optimized to support a \emph{diverse} range of egocentric sensorimotor tasks, reasonably match \emph{both} human behavioral error patterns and neural dynamics across all environmental scenarios that we were able to test.
Overall, these findings suggest that the neural mechanisms and behaviors of primate mental simulation have strong inductive biases associated with them, and are thus far most consistent with being optimized to future predict on \emph{reusable} visual representations that are useful for Embodied AI more generally. | Neural Foundations of Mental Simulation: Future Prediction of Latent Representations on Dynamic Scenes | [
"Aran Nayebi",
"Rishi Rajalingham",
"Mehrdad Jazayeri",
"Guangyu Robert Yang"
] | Conference | spotlight | 2305.11772 | [
"https://github.com/anayebi/mental-sim"
] | https://huggingface.co/papers/2305.11772 | 1 | 0 | 0 | 4 | 1 | [
"anayebi/mental-sim-models"
] | [] | [] |
null | https://openreview.net/forum?id=ffFcRPpnWx | @inproceedings{
huang2023rsdel,
title={{RS}-Del: Edit Distance Robustness Certificates for Sequence Classifiers via Randomized Deletion},
author={Zhuoqun Huang and Neil G Marchant and Keane Lucas and Lujo Bauer and Olga Ohrimenko and Benjamin I. P. Rubinstein},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ffFcRPpnWx}
} | Randomized smoothing is a leading approach for constructing classifiers that are certifiably robust against adversarial examples. Existing work on randomized smoothing has focused on classifiers with continuous inputs, such as images, where $\ell_p$-norm bounded adversaries are commonly studied. However, there has been limited work for classifiers with discrete or variable-size inputs, such as for source code, which require different threat models and smoothing mechanisms. In this work, we adapt randomized smoothing for discrete sequence classifiers to provide certified robustness against edit distance-bounded adversaries. Our proposed smoothing mechanism randomized deletion (RS-Del) applies random deletion edits, which are (perhaps surprisingly) sufficient to confer robustness against adversarial deletion, insertion and substitution edits. Our proof of certification deviates from the established Neyman-Pearson approach, which is intractable in our setting, and is instead organized around longest common subsequences. We present a case study on malware detection—a binary classification problem on byte sequences where classifier evasion is a well-established threat model. When applied to the popular MalConv malware detection model, our smoothing mechanism RS-Del achieves a certified accuracy of 91% at an edit distance radius of 128 bytes. | RS-Del: Edit Distance Robustness Certificates for Sequence Classifiers via Randomized Deletion | [
"Zhuoqun Huang",
"Neil G Marchant",
"Keane Lucas",
"Lujo Bauer",
"Olga Ohrimenko",
"Benjamin I. P. Rubinstein"
] | Conference | poster | 2302.01757 | [
"https://github.com/dovermore/randomized-deletion"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
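The randomized-deletion smoothing mechanism (RS-Del) lends itself to a compact sketch: perturb a byte sequence with independent random deletions and take a majority vote of a base classifier. The snippet below is a minimal illustration with a toy stand-in classifier; the LCS-based certification from the paper is not shown, and the `keep_prob` value and byte-pattern detector are assumptions.

```python
import random
from collections import Counter

def random_deletion(seq, keep_prob=0.97, rng=random):
    # Keep each byte independently; deletion is the only edit applied.
    return bytes(b for b in seq if rng.random() < keep_prob)

def base_classifier(seq):
    # Toy stand-in for e.g. MalConv: flag sequences containing a 0x90 run.
    return int(b"\x90\x90\x90" in seq)

def smoothed_classify(seq, n_samples=1000, keep_prob=0.97, seed=0):
    rng = random.Random(seed)
    votes = Counter(
        base_classifier(random_deletion(seq, keep_prob, rng))
        for _ in range(n_samples)
    )
    label, count = votes.most_common(1)[0]
    return label, count / n_samples          # label and empirical vote share

x = bytes([0x90] * 8 + list(range(64)))
print(smoothed_classify(x))
```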
null | https://openreview.net/forum?id=fezV91IJIo | @inproceedings{
jiang2023analysis,
title={Analysis of Variance of Multiple Causal Networks},
author={Zhongli Jiang and Dabao Zhang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=fezV91IJIo}
} | Constructing a directed cyclic graph (DCG) is challenged by both algorithmic difficulty and computational burden. Comparing multiple DCGs is even more difficult, compounded by the need to identify dynamic causalities across graphs. We propose to unify multiple DCGs with a single structural model and develop a limited-information-based method to simultaneously construct multiple networks and infer their disparities, which can be visualized by appropriate correspondence analysis. The algorithm provides DCGs with robust non-asymptotic theoretical properties. It is designed with two sequential stages, each of which involves parallel computation tasks that are scalable to the network complexity. Taking advantage of high-performance clusters, our method makes it possible to evaluate the statistical significance of DCGs using the bootstrap method. We demonstrated the effectiveness of our method by applying it to synthetic and real datasets. | Analysis of Variance of Multiple Causal Networks | [
"Zhongli Jiang",
"Dabao Zhang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=fem6BIJkdv | @inproceedings{
silva2023representation,
title={Representation Learning via Consistent Assignment of Views over Random Partitions},
author={Thalles Santos Silva and Ad{\'\i}n Ram{\'\i}rez Rivera},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=fem6BIJkdv}
} | We present Consistent Assignment of Views over Random Partitions (CARP), a self-supervised clustering method for representation learning of visual features. CARP learns prototypes in an end-to-end online fashion using gradient descent without additional non-differentiable modules to solve the cluster assignment problem. CARP optimizes a new pretext task based on random partitions of prototypes that regularizes the model and enforces consistency between views' assignments. Additionally, our method improves training stability and prevents collapsed solutions in joint-embedding training. Through an extensive evaluation, we demonstrate that CARP's representations are suitable for learning downstream tasks. We evaluate CARP's representation capabilities in 17 datasets across many standard protocols, including linear evaluation, few-shot classification, $k$-NN, $k$-means, image retrieval, and copy detection. We compare CARP's performance to 11 existing self-supervised methods. We extensively ablate our method and demonstrate that our proposed random partition pretext task improves the quality of the learned representations by devising multiple random classification tasks.
In transfer learning tasks, CARP achieves the best performance on average against many SSL methods trained for a longer time. | Representation Learning via Consistent Assignment of Views over Random Partitions | [
"Thalles Santos Silva",
"Adín Ramírez Rivera"
] | Conference | poster | 2310.12692 | [
"https://github.com/sthalles/carp"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
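The random-partition pretext task described in the CARP abstract can be sketched as follows. This is a hedged PyTorch illustration, not CARP's exact objective: prototypes are split into random blocks and each view predicts the other view's (soft, detached) assignment within each block; the temperature, block count, and swapped-prediction form are assumptions.

```python
import torch
import torch.nn.functional as F

def carp_style_loss(z1, z2, prototypes, n_blocks=4, temp=0.1):
    """Consistency of view assignments over a random partition of prototypes.

    z1, z2: L2-normalized embeddings of two views, shape (B, D).
    prototypes: (K, D) learnable prototype matrix. A sketch, not CARP's loss.
    """
    K = prototypes.shape[0]
    perm = torch.randperm(K)
    loss = 0.0
    for block in perm.chunk(n_blocks):
        p = F.normalize(prototypes[block], dim=1)
        logits1 = z1 @ p.T / temp                 # (B, |block|)
        logits2 = z2 @ p.T / temp
        # Swapped prediction: each view predicts the other's soft assignment.
        t2 = F.softmax(logits2, dim=1).detach()
        t1 = F.softmax(logits1, dim=1).detach()
        loss = loss - (t2 * F.log_softmax(logits1, dim=1)).sum(1).mean()
        loss = loss - (t1 * F.log_softmax(logits2, dim=1)).sum(1).mean()
    return loss / (2 * n_blocks)

z1 = F.normalize(torch.randn(8, 32), dim=1)
z2 = F.normalize(torch.randn(8, 32), dim=1)
protos = torch.randn(64, 32, requires_grad=True)
print(carp_style_loss(z1, z2, protos))
```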
null | https://openreview.net/forum?id=fcYObrixSS | @inproceedings{
liu2023lepard,
title={{LEPARD}: Learning Explicit Part Discovery for 3D Articulated Shape Reconstruction},
author={Di Liu and Anastasis Stathopoulos and Qilong Zhangli and Yunhe Gao and Dimitris N. Metaxas},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=fcYObrixSS}
} | Reconstructing the 3D articulated shape of an animal from a single in-the-wild image is a challenging task. We propose LEPARD, a learning-based framework that discovers semantically meaningful 3D parts and reconstructs 3D shapes in a part-based manner. This is advantageous as 3D parts are robust to pose variations due to articulations and their shape is typically simpler than the overall shape of the object. In our framework, the parts are explicitly represented as parameterized primitive surfaces with global and local deformations in 3D that deform to match the image evidence. We propose a kinematics-inspired optimization to guide each transformation of the primitive deformation given 2D evidence. Similar to recent approaches, LEPARD is only trained using off-the-shelf deep features from DINO and does not require any form of 2D or 3D annotations. Experiments on 3D animal shape reconstruction demonstrate significant improvement over existing alternatives in terms of both the overall reconstruction performance as well as the ability to discover semantically meaningful and consistent parts. | LEPARD: Learning Explicit Part Discovery for 3D Articulated Shape Reconstruction | [
"Di Liu",
"Anastasis Stathopoulos",
"Qilong Zhangli",
"Yunhe Gao",
"Dimitris N. Metaxas"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=fbpTObq6TW | @inproceedings{
imanishi2023a,
title={A fast heuristic to optimize time-space tradeoff for large models},
author={Akifumi Imanishi and Zijian Xu and Masayuki Takagi and Sixue Wang and Emilio Castillo},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=fbpTObq6TW}
} | Training large-scale neural networks is heavily constrained by GPU memory. In order to circumvent this limitation, gradient checkpointing, or recomputation, is a powerful technique. There is active research in this area with methods such as Checkmate or Moccasin. However, both Checkmate and Moccasin rely on mixed integer linear programming or constraint programming, resulting in limited scalability due to their exponentially large search space.
This paper proposes a novel algorithm for recomputation (FastSA) based on a simulated annealing heuristic that achieves comparable or even better solutions than state-of-the-art alternatives. FastSA can optimize computational graphs with thousands of nodes within 3 to 30 seconds, several orders of magnitude faster than current solutions.
We applied FastSA to PyTorch models and verified its effectiveness through popular large vision and text models, including recent language models with the transformer architecture. The results demonstrate significant memory reductions of 73% with an extra 18% computational overhead on average. Our experiments demonstrate the practicality and effectiveness of our recomputation algorithm, further highlighting its potential for wide application in various deep learning domains. | A fast heuristic to optimize time-space tradeoff for large models | [
"Akifumi Imanishi",
"Zijian Xu",
"Masayuki Takagi",
"Sixue Wang",
"Emilio Castillo"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
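The simulated-annealing search behind FastSA can be illustrated with a generic sketch. The following toy is not the paper's algorithm: it anneals over a boolean keep/recompute mask on a linear chain, with made-up memory sizes, recompute costs, a budget penalty, and a geometric cooling schedule, purely to show the Metropolis accept/reject structure.

```python
import math, random

random.seed(0)
N = 12                                    # nodes of a toy linear compute chain
mem = [random.randint(1, 8) for _ in range(N)]    # activation sizes
cost = [random.randint(1, 5) for _ in range(N)]   # recompute times
BUDGET = 30                               # peak-memory budget (toy)

def objective(keep):
    # Toy model: stored activations consume memory; dropped ones cost recompute
    # time, with a heavy penalty when the memory budget is exceeded.
    memory = sum(m for m, k in zip(mem, keep) if k)
    recompute = sum(c for c, k in zip(cost, keep) if not k)
    return recompute + 1000 * max(0, memory - BUDGET)

keep = [True] * N
best = cur = objective(keep)
T = 10.0
for step in range(2000):
    i = random.randrange(N)
    keep[i] = not keep[i]                 # flip one keep/recompute decision
    new = objective(keep)
    if new <= cur or random.random() < math.exp((cur - new) / T):
        cur = new                         # accept (Metropolis criterion)
        best = min(best, cur)
    else:
        keep[i] = not keep[i]             # reject: undo the flip
    T *= 0.997                            # geometric cooling schedule

print("best objective:", best, "activations kept:", sum(keep))
```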
null | https://openreview.net/forum?id=fY7dShbtmo | @inproceedings{
shaj2023multi,
title={Multi Time Scale World Models},
author={Vaisakh Shaj and Saleh GHOLAM ZADEH and Ozan Demir and Luiz Ricardo Douat and Gerhard Neumann},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=fY7dShbtmo}
} | Intelligent agents use internal world models to reason and make predictions about different courses of their actions at many scales. Devising learning paradigms and architectures that allow machines to learn world models that operate at multiple levels of temporal abstractions while dealing with complex uncertainty predictions is a major technical hurdle. In this work, we propose a probabilistic formalism to learn multi-time scale world models which we call the Multi Time Scale State Space (MTS3) model. Our model uses a computationally efficient inference scheme on multiple time scales for highly accurate long-horizon predictions and uncertainty estimates over several seconds into the future. Our experiments, which focus on action conditional long horizon future predictions, show that MTS3 outperforms recent methods on several system identification benchmarks including complex simulated and real-world dynamical systems. Code is available at this repository:
https://github.com/ALRhub/MTS3. | Multi Time Scale World Models | [
"Vaisakh Shaj",
"Saleh GHOLAM ZADEH",
"Ozan Demir",
"Luiz Ricardo Douat",
"Gerhard Neumann"
] | Conference | spotlight | 2310.18534 | [
"https://github.com/alrhub/mts3"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=fX64q0SNfL | @inproceedings{
tsai2023sample,
title={Sample based Explanations via Generalized Representers},
author={Che-Ping Tsai and Chih-Kuan Yeh and Pradeep Kumar Ravikumar},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=fX64q0SNfL}
} | We propose a general class of sample based explanations of machine learning models, which we term generalized representers. To measure the effect of a training sample on a model's test prediction, generalized representers use two components: a global sample importance that quantifies the importance of the training point to the model and is invariant to test samples, and a local sample importance that measures similarity between the training sample and the test point with a kernel. A key contribution of the paper is to show that generalized representers are the only class of sample based explanations satisfying a natural set of axiomatic properties. We discuss approaches to extract global importances given a kernel, and also natural choices of kernels given modern non-linear models. As we show, many popular existing sample based explanations could be cast as generalized representers with particular choices of kernels and approaches to extract global importances. Additionally, we conduct empirical comparisons of different generalized representers on two image classification datasets. | Sample based Explanations via Generalized Representers | [
"Che-Ping Tsai",
"Chih-Kuan Yeh",
"Pradeep Kumar Ravikumar"
] | Conference | poster | 2310.18526 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
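The "global importance times local kernel similarity" decomposition of generalized representers is exact for kernel ridge regression, which makes it a convenient setting for a sketch. The snippet below is a hedged illustration under that assumed instantiation, not the paper's treatment of deep models: the dual coefficients play the role of global importances and an RBF kernel supplies local similarity.

```python
import numpy as np

rng = np.random.default_rng(0)
Xtr = rng.normal(size=(50, 5))
ytr = Xtr @ rng.normal(size=5) + 0.1 * rng.normal(size=50)
xte = rng.normal(size=(1, 5))

def rbf(A, B, gamma=0.2):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Kernel ridge regression: f(x) = sum_i alpha_i k(x_i, x).
K = rbf(Xtr, Xtr)
alpha = np.linalg.solve(K + 1e-2 * np.eye(50), ytr)   # global importances

# Generalized-representer-style attribution of each training point:
# global importance (alpha_i) times local similarity (k(x_i, x_test)).
attrib = alpha * rbf(Xtr, xte)[:, 0]
pred = attrib.sum()                                   # attributions sum to f(x)
top = np.argsort(-np.abs(attrib))[:3]
print("prediction:", pred, "most influential train indices:", top)
```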
null | https://openreview.net/forum?id=fWLf8DV0fI | @inproceedings{
liu2023rethinking,
title={Rethinking Tokenizer and Decoder in Masked Graph Modeling for Molecules},
author={Zhiyuan Liu and Yaorui Shi and An Zhang and Enzhi Zhang and Kenji Kawaguchi and Xiang Wang and Tat-Seng Chua},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=fWLf8DV0fI}
} | Masked graph modeling (MGM) excels in the self-supervised representation learning of molecular graphs. Scrutinizing previous studies, we reveal a common scheme consisting of three key components: (1) graph tokenizer, which breaks a molecular graph into smaller fragments (i.e., subgraphs) and converts them into tokens; (2) graph masking, which corrupts the graph with masks; (3) graph autoencoder, which first applies an encoder on the masked graph to generate the representations, and then employs a decoder on the representations to recover the tokens of the original graph. However, the previous MGM studies focus extensively on graph masking and the encoder, while there is limited understanding of the tokenizer and decoder. To bridge the gap, we first summarize popular molecule tokenizers at the granularity of node, edge, motif, and Graph Neural Networks (GNNs), and then examine their roles as the MGM's reconstruction targets. Further, we explore the potential of adopting an expressive decoder in MGM. Our results show that a subgraph-level tokenizer and a sufficiently expressive decoder with remask decoding have a large impact on the encoder's representation learning. Finally, we propose a novel MGM method SimSGT, featuring a Simple GNN-based Tokenizer (SGT) and an effective decoding strategy. We empirically validate that our method outperforms the existing molecule self-supervised learning methods. Our codes and checkpoints are available at https://github.com/syr-cn/SimSGT. | Rethinking Tokenizer and Decoder in Masked Graph Modeling for Molecules | [
"Zhiyuan Liu",
"Yaorui Shi",
"An Zhang",
"Enzhi Zhang",
"Kenji Kawaguchi",
"Xiang Wang",
"Tat-Seng Chua"
] | Conference | poster | 2310.14753 | [
"https://github.com/syr-cn/simsgt"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=fW5ZUSVTkv | @inproceedings{
li2023learning,
title={Learning Domain-Aware Detection Head with Prompt Tuning},
author={Haochen Li and Rui Zhang and Hantao Yao and Xinkai Song and Yifan Hao and Yongwei Zhao and Ling Li and Yunji Chen},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=fW5ZUSVTkv}
} | Domain adaptive object detection (DAOD) aims to generalize detectors trained on an annotated source domain to an unlabelled target domain.
However, existing methods focus on reducing the domain bias of the detection backbone by inferring a discriminative visual encoder, while ignoring the domain bias in the detection head.
Inspired by the high generalization of vision-language models (VLMs), applying a VLM as the robust detection backbone followed by a domain-aware detection head is a reasonable way to learn a discriminative detector for each domain, rather than reducing the domain bias as in traditional methods.
To address the above issue, we propose a novel DAOD framework named Domain-Aware detection head with Prompt tuning (DA-Pro), which applies the learnable domain-adaptive prompt to generate the dynamic detection head for each domain.
Formally, the domain-adaptive prompt consists of the domain-invariant tokens, domain-specific tokens, and the domain-related textual description along with the class label.
Furthermore, two constraints between the source and target domains are applied to ensure that the domain-adaptive prompt can capture the domains-shared and domain-specific knowledge.
A prompt ensemble strategy is also proposed to reduce the effect of prompt disturbance.
Comprehensive experiments over multiple cross-domain adaptation tasks demonstrate that using the domain-adaptive prompt can produce an effective domain-related detection head that boosts domain-adaptive object detection.
Our code is available at https://github.com/Therock90421/DA-Pro. | Learning Domain-Aware Detection Head with Prompt Tuning | [
"Haochen Li",
"Rui Zhang",
"Hantao Yao",
"Xinkai Song",
"Yifan Hao",
"Yongwei Zhao",
"Ling Li",
"Yunji Chen"
] | Conference | poster | 2306.05718 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
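The domain-adaptive prompt structure described above (domain-invariant tokens, domain-specific tokens, and a class token) can be sketched in a few lines. This is a hedged PyTorch illustration under assumed shapes, not DA-Pro's implementation: the mean-pooling "text encoder" and the token counts are stand-ins, and the constraints and prompt ensembling from the abstract are omitted.

```python
import torch
import torch.nn as nn

D, n_inv, n_spec, n_cls = 64, 4, 4, 3     # embed dim, token counts, classes

# Learnable context tokens (assumed structure, not the exact DA-Pro layout).
inv_tokens = nn.Parameter(torch.randn(n_inv, D) * 0.02)          # shared
spec_tokens = {dom: nn.Parameter(torch.randn(n_spec, D) * 0.02)  # per domain
               for dom in ("source", "target")}
cls_embed = nn.Embedding(n_cls, D)                               # class names

def build_prompt(domain, cls_id):
    # [domain-invariant ctx][domain-specific ctx][class token]
    return torch.cat([inv_tokens,
                      spec_tokens[domain],
                      cls_embed(torch.tensor([cls_id]))], dim=0)

# Stand-in text encoder: mean-pool the prompt tokens into one classifier weight.
encode = lambda prompt: prompt.mean(dim=0)

weights = torch.stack([encode(build_prompt("target", c)) for c in range(n_cls)])
region_feat = torch.randn(D)              # a RoI feature from the detector
logits = weights @ region_feat            # domain-aware classification head
print(logits.shape)                       # torch.Size([3])
```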
null | https://openreview.net/forum?id=fUZUoSLXw3 | @inproceedings{
yang2023two,
title={Two Sides of One Coin: the Limits of Untuned {SGD} and the Power of Adaptive Methods},
author={Junchi YANG and Xiang Li and Ilyas Fatkhullin and Niao He},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=fUZUoSLXw3}
} | The classical analysis of Stochastic Gradient Descent (SGD) with polynomially decaying stepsize $\eta_t = \eta/\sqrt{t}$ relies on well-tuned $\eta$ depending on problem parameters such as Lipschitz smoothness constant, which is often unknown in practice. In this work, we prove that SGD with arbitrary $\eta > 0$, referred to as untuned SGD, still attains an order-optimal convergence rate $\widetilde{\mathcal{O}}(T^{-1/4})$ in terms of gradient norm for minimizing smooth objectives. Unfortunately, it comes at the expense of a catastrophic exponential dependence on the smoothness constant, which we show is unavoidable for this scheme even in the noiseless setting. We then examine three families of adaptive methods — Normalized SGD (NSGD), AMSGrad, and AdaGrad — unveiling their power in preventing such exponential dependency in the absence of information about the smoothness parameter and boundedness of stochastic gradients. Our results provide theoretical justification for the advantage of adaptive methods over untuned SGD in alleviating the issue with large gradients. | Two Sides of One Coin: the Limits of Untuned SGD and the Power of Adaptive Methods | [
"Junchi YANG",
"Xiang Li",
"Ilyas Fatkhullin",
"Niao He"
] | Conference | poster | 2305.12475 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
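The exponential transient of untuned SGD versus the stability of Normalized SGD shows up even on a 1-D quadratic. The following is a deterministic toy demonstration of that contrast under assumed constants (it is not the paper's stochastic setting): with $\eta_t = \eta/\sqrt{t}$ and $\eta L$ well above 2, the iterates blow up by a factor that grows exponentially in $L$ before the decaying stepsize rescues them, while the normalized step never exceeds the stepsize in magnitude.

```python
import numpy as np

L, eta, T = 10.0, 1.0, 5000            # smoothness, untuned stepsize, steps
grad = lambda x: L * x                 # f(x) = (L/2) x^2

def run(update):
    x, peak = 1.0, 1.0
    for t in range(1, T + 1):
        x = update(x, eta / np.sqrt(t))
        peak = max(peak, abs(x))
    return abs(x), peak

# Untuned SGD: transient blow-up whose size scales exponentially with L.
sgd_final, sgd_peak = run(lambda x, lr: x - lr * grad(x))
# Normalized SGD: step length is lr regardless of gradient magnitude.
nsgd_final, nsgd_peak = run(lambda x, lr: x - lr * np.sign(grad(x)))

print(f"untuned SGD    final {sgd_final:.2e}  peak {sgd_peak:.2e}")
print(f"normalized SGD final {nsgd_final:.2e}  peak {nsgd_peak:.2e}")
```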
null | https://openreview.net/forum?id=fU9U7OYxfE | @inproceedings{
kolumbus2023asynchronous,
title={Asynchronous Proportional Response Dynamics: Convergence in Markets with Adversarial Scheduling},
author={Yoav Kolumbus and Menahem Levy and Noam Nisan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=fU9U7OYxfE}
} | We study Proportional Response Dynamics (PRD) in linear Fisher markets, where participants act asynchronously. We model this scenario as a sequential process in which at each step, an adversary selects a subset of the players to update their bids, subject to liveness constraints. We show that if every bidder individually applies the PRD update rule whenever they are included in the group of bidders selected by the adversary, then, in the generic case, the entire dynamic converges to a competitive equilibrium of the market. Our proof technique reveals additional properties of linear Fisher markets, such as the uniqueness of the market equilibrium for generic parameters and the convergence of associated no swap regret dynamics and best response dynamics under certain conditions. | Asynchronous Proportional Response Dynamics: Convergence in Markets with Adversarial Scheduling | [
"Yoav Kolumbus",
"Menahem Levy",
"Noam Nisan"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
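The proportional response update in a linear Fisher market is simple enough to sketch directly. Below is a minimal NumPy illustration of the asynchronous setting from the abstract, with one assumption flagged: the adversarial scheduler is replaced by a random nonempty subset of buyers per step. The update rule itself, $b'_{ij} = B_i \, u_{ij} x_{ij} / \sum_k u_{ik} x_{ik}$, is the standard PRD rule.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3                                  # buyers, goods
U = rng.uniform(0.1, 1.0, size=(n, m))       # linear utilities u_ij
B = np.ones(n)                               # unit budgets

bids = np.full((n, m), 1.0 / m)              # b_ij, rows sum to budgets

for step in range(500):
    prices = bids.sum(axis=0)                # p_j = sum_i b_ij
    alloc = bids / prices                    # x_ij = b_ij / p_j
    utils = (U * alloc).sum(axis=1)          # u_i(x_i)
    # Asynchronous update: an arbitrary (here random) nonempty subset of
    # buyers applies the proportional response rule this step.
    active = rng.random(n) < 0.5
    if not active.any():
        continue
    new_bids = B[:, None] * (U * alloc) / utils[:, None]
    bids[active] = new_bids[active]

prices = bids.sum(axis=0)
print("prices:", np.round(prices, 4))
print("budget check:", np.round(bids.sum(axis=1), 4))  # should equal B
```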
null | https://openreview.net/forum?id=fTyGT5fulj | @inproceedings{
zhang2023curriculum,
title={Curriculum Learning for Graph Neural Networks: Which Edges Should We Learn First},
author={Zheng Zhang and Junxiang Wang and Liang Zhao},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=fTyGT5fulj}
} | Graph Neural Networks (GNNs) have achieved great success in representing data with dependencies by recursively propagating and aggregating messages along the edges. However, edges in real-world graphs often have varying degrees of difficulty, and some edges may even be noisy for the downstream tasks. Therefore, existing GNNs may lead to suboptimal learned representations because they usually treat every edge in the graph equally. On the other hand, Curriculum Learning (CL), which mimics the human learning principle of learning data samples in a meaningful order, has been shown to be effective in improving the generalization ability and robustness of representation learners by gradually proceeding from easy to more difficult samples during training. Unfortunately, existing CL strategies are designed for independent data samples and cannot trivially generalize to handle data dependencies. To address these issues, we propose a novel CL strategy to gradually incorporate more edges into training according to their difficulty from easy to hard, where the degree of difficulty is measured by how well the edges are expected given the model training status. We demonstrate the strength of our proposed method in improving the generalization ability and robustness of learned representations through extensive experiments on nine synthetic datasets and nine real-world datasets. The code for our proposed method is available at https://github.com/rollingstonezz/Curriculum_learning_for_GNNs | Curriculum Learning for Graph Neural Networks: Which Edges Should We Learn First | [
"Zheng Zhang",
"Junxiang Wang",
"Liang Zhao"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
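The easy-to-hard edge schedule in the abstract amounts to ranking edges by difficulty and admitting a growing fraction per epoch. The sketch below is a minimal stand-in, not the paper's method: the difficulty scores are a random placeholder (the paper derives them from how well edges are expected given the model's training status), and the linear pacing function is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n_edges, n_epochs = 1000, 50
difficulty = rng.random(n_edges)          # placeholder: e.g. 1 - edge likelihood
                                          # under the current model
order = np.argsort(difficulty)            # easiest edges first

def pacing(epoch, start=0.3, grow=0.02):
    # Linear pacing: fraction of edges admitted at this epoch.
    return min(1.0, start + grow * epoch)

for epoch in range(n_epochs):
    k = int(pacing(epoch) * n_edges)
    active_edges = order[:k]              # train the GNN on this subgraph
    if epoch % 10 == 0:
        print(f"epoch {epoch:2d}: {k} / {n_edges} edges in the training graph")
```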
null | https://openreview.net/forum?id=fShubymWrc | @inproceedings{
nichani2023provable,
title={Provable Guarantees for Nonlinear Feature Learning in Three-Layer Neural Networks},
author={Eshaan Nichani and Alex Damian and Jason D. Lee},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=fShubymWrc}
} | One of the central questions in the theory of deep learning is to understand how neural networks learn hierarchical features. The ability of deep networks to extract salient features is crucial to both their outstanding generalization ability and the modern deep learning paradigm of pretraining and finetuning. However, this feature learning process remains poorly understood from a theoretical perspective, with existing analyses largely restricted to two-layer networks. In this work we show that three-layer neural networks have provably richer feature learning capabilities than two-layer networks. We analyze the features learned by a three-layer network trained with layer-wise gradient descent, and present a general purpose theorem which upper bounds the sample complexity and width needed to achieve low test error when the target has specific hierarchical structure. We instantiate our framework in specific statistical learning settings -- single-index models and functions of quadratic features -- and show that in the latter setting three-layer networks obtain a sample complexity improvement over all existing guarantees for two-layer networks. Crucially, this sample complexity improvement relies on the ability of three-layer networks to efficiently learn *nonlinear* features. We then establish a concrete optimization-based depth separation by constructing a function which is efficiently learnable via gradient descent on a three-layer network, yet cannot be learned efficiently by a two-layer network. Our work makes progress towards understanding the provable benefit of three-layer neural networks over two-layer networks in the feature learning regime. | Provable Guarantees for Nonlinear Feature Learning in Three-Layer Neural Networks | [
"Eshaan Nichani",
"Alex Damian",
"Jason D. Lee"
] | Conference | spotlight | 2305.06986 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=fPAAgjISu0 | @inproceedings{
cao2023in,
title={In Defense of Softmax Parametrization for Calibrated and Consistent Learning to Defer},
author={Yuzhou Cao and Hussein Mozannar and Lei Feng and Hongxin Wei and Bo An},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=fPAAgjISu0}
} | Enabling machine learning classifiers to defer their decision to a downstream expert when the expert is more accurate will ensure improved safety and performance. This objective can be achieved with the learning-to-defer framework which aims to jointly learn how to classify and how to defer to the expert. In recent studies, it has been theoretically shown that popular estimators for learning to defer parameterized with softmax provide unbounded estimates for the likelihood of deferring which makes them uncalibrated. However, it remains unknown whether this is due to the widely used softmax parameterization and if we can find a softmax-based estimator that is both statistically consistent and possesses a valid probability estimator. In this work, we first show that the cause of the miscalibrated and unbounded estimator in prior literature is due to the symmetric nature of the surrogate losses used and not due to softmax. We then propose a novel statistically consistent asymmetric softmax-based surrogate loss that can produce valid estimates without the issue of unboundedness. We further analyze the non-asymptotic properties of our proposed method and empirically validate its performance and calibration on benchmark datasets. | In Defense of Softmax Parametrization for Calibrated and Consistent Learning to Defer | [
"Yuzhou Cao",
"Hussein Mozannar",
"Lei Feng",
"Hongxin Wei",
"Bo An"
] | Conference | poster | 2311.01106 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=fKwG6grp8o | @inproceedings{
bordelon2023dynamics,
title={Dynamics of Finite Width Kernel and Prediction Fluctuations in Mean Field Neural Networks},
author={Blake Bordelon and Cengiz Pehlevan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=fKwG6grp8o}
} | We analyze the dynamics of finite width effects in wide but finite feature learning neural networks. Starting from a dynamical mean field theory description of infinite width deep neural network kernel and prediction dynamics, we provide a characterization of the $\mathcal{O}(1/\sqrt{\text{width}})$ fluctuations of the DMFT order parameters over random initializations of the network weights. Our results, while perturbative in width, unlike prior analyses, are non-perturbative in the strength of feature learning. In the lazy limit of network training, all kernels are random but static in time and the prediction variance has a universal form. However, in the rich, feature learning regime, the fluctuations of the kernels and predictions are dynamically coupled with a variance that can be computed self-consistently. In two layer networks, we show how feature learning can dynamically reduce the variance of the final tangent kernel and final network predictions. We also show how initialization variance can slow down online learning in wide but finite networks. In deeper networks, kernel variance can dramatically accumulate through subsequent layers at large feature learning strengths, but feature learning continues to improve the signal-to-noise ratio of the feature kernels. In discrete time, we demonstrate that large learning rate phenomena such as edge of stability effects can be well captured by infinite width dynamics and that initialization variance can decrease dynamically. For CNNs trained on CIFAR-10, we empirically find significant corrections to both the bias and variance of network dynamics due to finite width. | Dynamics of Finite Width Kernel and Prediction Fluctuations in Mean Field Neural Networks | [
"Blake Bordelon",
"Cengiz Pehlevan"
] | Conference | spotlight | 2304.03408 | [
"https://github.com/pehlevan-group/dmft_fluctuations"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=fKVEMNmWqU | @inproceedings{
ding2023reduced,
title={Reduced Policy Optimization for Continuous Control with Hard Constraints},
author={Shutong Ding and Jingya Wang and Yali Du and Ye Shi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=fKVEMNmWqU}
} | Recent advances in constrained reinforcement learning (RL) have endowed reinforcement learning with certain safety guarantees. However, deploying existing constrained RL algorithms in continuous control tasks with general hard constraints remains challenging, particularly in those situations with non-convex hard constraints. Inspired by the generalized reduced gradient (GRG) algorithm, a classical constrained optimization technique, we propose a reduced policy optimization (RPO) algorithm that combines RL with GRG to address general hard constraints. RPO partitions actions into basic actions and nonbasic actions following the GRG method and outputs the basic actions via a policy network. Subsequently, RPO calculates the nonbasic actions by solving equations based on equality constraints using the obtained basic actions. The policy network is then updated by implicitly differentiating nonbasic actions with respect to basic actions. Additionally, we introduce an action projection procedure based on the reduced gradient and apply a modified Lagrangian relaxation technique to ensure inequality constraints are satisfied. To the best of our knowledge, RPO is the first attempt that introduces GRG to RL as a way of efficiently handling both equality and inequality hard constraints. It is worth noting that there is currently a lack of RL environments with complex hard constraints, which motivates us to develop three new benchmarks: two robotics manipulation tasks and a smart grid operation control task. With these benchmarks, RPO achieves better performance than previous constrained RL algorithms in terms of both cumulative reward and constraint violation. We believe RPO, along with the new benchmarks, will open up new opportunities for applying RL to real-world problems with complex constraints. | Reduced Policy Optimization for Continuous Control with Hard Constraints | [
"Shutong Ding",
"Jingya Wang",
"Yali Du",
"Ye Shi"
] | Conference | poster | 2310.09574 | [
"https://github.com/wadx2019/rpo"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=fKQEmHoLb6 | @inproceedings{
yu2023learning,
title={Learning Energy-Based Prior Model with Diffusion-Amortized {MCMC}},
author={Peiyu Yu and Yaxuan Zhu and Sirui Xie and Xiaojian Ma and Ruiqi Gao and Song-Chun Zhu and Ying Nian Wu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=fKQEmHoLb6}
} | Latent space EBMs, also known as energy-based priors, have drawn growing interest in the field of generative modeling due to their flexibility in formulation and the strong modeling power of the latent space. However, the common practice of learning latent space EBMs with non-convergent short-run MCMC for prior and posterior sampling is hindering the model from further progress; the degenerate MCMC sampling quality in practice often leads to degraded generation quality and instability in training, especially with highly multi-modal and/or high-dimensional target distributions. To remedy this sampling issue, in this paper we introduce a simple but effective diffusion-based amortization method for long-run MCMC sampling and develop a novel learning algorithm for the latent space EBM based on it. We provide theoretical evidence that the learned amortization of MCMC is a valid long-run MCMC sampler. Experiments on several image modeling benchmark datasets demonstrate the superior performance of our method compared with strong counterparts. | Learning Energy-Based Prior Model with Diffusion-Amortized MCMC | [
"Peiyu Yu",
"Yaxuan Zhu",
"Sirui Xie",
"Xiaojian Ma",
"Ruiqi Gao",
"Song-Chun Zhu",
"Ying Nian Wu"
] | Conference | poster | 2310.03218 | [
"https://github.com/yupeiyu98/diffusion-amortized-mcmc"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=fHyLsfMDIs | @inproceedings{
gushchin2023entropic,
title={Entropic Neural Optimal Transport via Diffusion Processes},
author={Nikita Gushchin and Alexander Kolesov and Alexander Korotin and Dmitry P. Vetrov and Evgeny Burnaev},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=fHyLsfMDIs}
} | We propose a novel neural algorithm for the fundamental problem of computing the entropic optimal transport (EOT) plan between probability distributions which are accessible by samples. Our algorithm is based on the saddle point reformulation of the dynamic version of EOT which is known as the Schrödinger Bridge problem. In contrast to the prior methods for large-scale EOT, our algorithm is end-to-end and consists of a single learning step, has a fast inference procedure, and allows handling small values of the entropy regularization coefficient which is of particular importance in some applied problems. Empirically, we show the performance of the method on several large-scale EOT tasks. The code for the ENOT solver can be found at https://github.com/ngushchin/EntropicNeuralOptimalTransport | Entropic Neural Optimal Transport via Diffusion Processes | [
"Nikita Gushchin",
"Alexander Kolesov",
"Alexander Korotin",
"Dmitry P. Vetrov",
"Evgeny Burnaev"
] | Conference | oral | 2211.01156 | [
"https://github.com/ngushchin/entropicneuraloptimaltransport"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=fHsBNNDroC | @inproceedings{
haghtalab2023calibrated,
title={Calibrated Stackelberg Games: Learning Optimal Commitments Against Calibrated Agents},
author={Nika Haghtalab and Chara Podimata and Kunhe Yang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=fHsBNNDroC}
} | In this paper, we introduce a generalization of the standard Stackelberg Games (SGs) framework: _Calibrated Stackelberg Games_. In CSGs, a principal repeatedly interacts with an agent who (contrary to standard SGs) does not have direct access to the principal's action but instead best responds to _calibrated forecasts_ about it. CSG is a powerful modeling tool that goes beyond assuming that agents use ad hoc and highly specified algorithms for interacting in strategic settings to infer the principal's actions and thus more robustly addresses real-life applications that SGs were originally intended to capture. Along with CSGs, we also introduce a stronger notion of calibration, termed _adaptive calibration_, that provides fine-grained any-time calibration guarantees against adversarial sequences. We give a general approach for obtaining adaptive calibration algorithms and specialize them for finite CSGs. In our main technical result, we show that in CSGs, the principal can achieve utility that converges to the optimum Stackelberg value of the game both in _finite_ and _continuous_ settings and that no higher utility is achievable. Two prominent and immediate applications of our results are the settings of learning in Stackelberg Security Games and strategic classification, both against _calibrated_ agents. | Calibrated Stackelberg Games: Learning Optimal Commitments Against Calibrated Agents | [
"Nika Haghtalab",
"Chara Podimata",
"Kunhe Yang"
] | Conference | spotlight | 2306.02704 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=fFJThJ94rY | @inproceedings{
lee2023switching,
title={Switching Autoregressive Low-rank Tensor Models},
author={Hyun Dong Lee and Andrew Warrington and Joshua I Glaser and Scott Linderman},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=fFJThJ94rY}
} | An important problem in time-series analysis is modeling systems with time-varying dynamics. Probabilistic models with joint continuous and discrete latent states offer interpretable, efficient, and experimentally useful descriptions of such data. Commonly used models include autoregressive hidden Markov models (ARHMMs) and switching linear dynamical systems (SLDSs), each with its own advantages and disadvantages. ARHMMs permit exact inference and easy parameter estimation, but are parameter intensive when modeling long dependencies, and hence are prone to overfitting. In contrast, SLDSs can capture long-range dependencies in a parameter efficient way through Markovian latent dynamics, but present an intractable likelihood and a challenging parameter estimation task. In this paper, we propose _switching autoregressive low-rank tensor_ (SALT) models, which retain the advantages of both approaches while ameliorating the weaknesses. SALT parameterizes the tensor of an ARHMM with a low-rank factorization to control the number of parameters and allow longer range dependencies without overfitting. We prove theoretical and discuss practical connections between SALT, linear dynamical systems, and SLDSs. We empirically demonstrate quantitative advantages of SALT models on a range of simulated and real prediction tasks, including behavioral and neural datasets. Furthermore, the learned low-rank tensor provides novel insights into temporal dependencies within each discrete state. | Switching Autoregressive Low-rank Tensor Models | [
"Hyun Dong Lee",
"Andrew Warrington",
"Joshua I Glaser",
"Scott Linderman"
] | Conference | poster | 2306.03291 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=fAdMly4ki5 | @inproceedings{
he2023diffusion,
title={Diffusion Model is an Effective Planner and Data Synthesizer for Multi-Task Reinforcement Learning},
author={Haoran He and Chenjia Bai and Kang Xu and Zhuoran Yang and Weinan Zhang and Dong Wang and Bin Zhao and Xuelong Li},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=fAdMly4ki5}
} | Diffusion models have demonstrated highly-expressive generative capabilities in vision and NLP. Recent studies in reinforcement learning (RL) have shown that diffusion models are also powerful in modeling complex policies or trajectories in offline datasets. However, these works have been limited to single-task settings where a generalist agent capable of addressing multi-task predicaments is absent. In this paper, we aim to investigate the effectiveness of a single diffusion model in modeling large-scale multi-task offline data, which can be challenging due to diverse and multimodal data distribution. Specifically, we propose Multi-Task Diffusion Model (\textsc{MTDiff}), a diffusion-based method that incorporates Transformer backbones and prompt learning for generative planning and data synthesis in multi-task offline settings. \textsc{MTDiff} leverages vast amounts of knowledge available in multi-task data and performs implicit knowledge sharing among tasks. For generative planning, we find \textsc{MTDiff} outperforms state-of-the-art algorithms across 50 tasks on Meta-World and 8 maps on Maze2D. For data synthesis, \textsc{MTDiff} generates high-quality data for testing tasks given a single demonstration as a prompt, which enhances the low-quality datasets for even unseen tasks. | Diffusion Model is an Effective Planner and Data Synthesizer for Multi-Task Reinforcement Learning | [
"Haoran He",
"Chenjia Bai",
"Kang Xu",
"Zhuoran Yang",
"Weinan Zhang",
"Dong Wang",
"Bin Zhao",
"Xuelong Li"
] | Conference | poster | 2305.18459 | [
""
] | https://huggingface.co/papers/2305.18459 | 1 | 0 | 0 | 8 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=f8zIs2IB6Q | @inproceedings{
tang2023sequential,
title={Sequential Memory with Temporal Predictive Coding},
author={Mufeng Tang and Helen Barron and Rafal Bogacz},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=f8zIs2IB6Q}
} | Forming accurate memory of sequential stimuli is a fundamental function of biological agents. However, the computational mechanism underlying sequential memory in the brain remains unclear. Inspired by neuroscience theories and recent successes in applying predictive coding (PC) to \emph{static} memory tasks, in this work we propose a novel PC-based model for \emph{sequential} memory, called \emph{temporal predictive coding} (tPC). We show that our tPC models can memorize and retrieve sequential inputs accurately with a biologically plausible neural implementation. Importantly, our analytical study reveals that tPC can be viewed as a classical Asymmetric Hopfield Network (AHN) with an implicit statistical whitening process, which leads to more stable performance in sequential memory tasks of structured inputs. Moreover, we find that tPC exhibits properties consistent with behavioral observations and theories in neuroscience, thereby strengthening its biological relevance. Our work establishes a possible computational mechanism underlying sequential memory in the brain that can also be theoretically interpreted using existing memory model frameworks. | Sequential Memory with Temporal Predictive Coding | [
"Mufeng Tang",
"Helen Barron",
"Rafal Bogacz"
] | Conference | poster | 2305.11982 | [
"https://github.com/c16mftang/sequential-memory"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
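The core of sequential memory via one-step prediction can be sketched compactly. The following NumPy toy is a hedged stand-in for tPC, not the paper's biologically plausible implementation: it fits a linear one-step predictor by least squares (where tPC would minimize prediction errors with local rules) and recalls the sequence by iterating the learned dynamics, echoing the asymmetric-Hopfield view in the abstract (the implicit whitening analysis is not shown).

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 8, 20
seq = rng.choice([-1.0, 1.0], size=(T, d))     # a sequence of patterns

X, Y = seq[:-1], seq[1:]
# One-step predictor x_{t+1} ~ W x_t, fit by least squares (a stand-in for
# the prediction-error minimization that tPC performs with local rules).
W = Y.T @ np.linalg.pinv(X.T)

# Recall: seed with the first pattern and iterate the learned dynamics.
x = seq[0]
recalled = [x]
for _ in range(T - 1):
    x = np.sign(W @ x)                         # binarize, Hopfield-style
    recalled.append(x)

acc = np.mean([np.all(r == s) for r, s in zip(recalled, seq)])
print("fraction of patterns recalled exactly:", acc)
```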
null | https://openreview.net/forum?id=f7wFwPJwBe | @inproceedings{
luo2023learning,
title={Learning Re-sampling Methods with Parameter Attribution for Image Super-resolution},
author={Xiaotong Luo and Yuan Xie and Yanyun Qu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=f7wFwPJwBe}
} | Single image super-resolution (SISR) has made a significant breakthrough benefiting from the prevalent rise of deep neural networks and large-scale training samples. The mainstream deep SR models primarily focus on network architecture design as well as optimization schemes, while few pay attention to the training data. In fact, most of the existing SR methods train the model on uniformly sampled patch pairs from the whole image. However, the uneven image content makes the training data present an unbalanced distribution, i.e., the easily reconstructed region (smooth) occupies the majority of the data, while the hard reconstructed region (edge or texture) has only a few samples. Based on this phenomenon, we consider rethinking the current paradigm of merely using uniform data sampling for training SR models. In this paper, we propose a simple yet effective Bi-Sampling Parameter Attribution (BSPA) method for accurate image SR. Specifically, the bi-sampling consists of uniform sampling and inverse sampling, which is introduced to reconcile the unbalanced inherent data bias. The former aims to keep the intrinsic data distribution, and the latter is designed to enhance the feature extraction ability of the model on the hard samples. Moreover, integrated gradient is introduced to attribute the contribution of each parameter in the alternate models trained by both sampling data so as to filter the trivial parameters for further dynamic refinement. By progressively decoupling the allocation of parameters, the SR model can learn a more compact representation. Extensive experiments on publicly available datasets demonstrate that our proposal can effectively boost the performance of baseline methods from the data re-sampling view. | Learning Re-sampling Methods with Parameter Attribution for Image Super-resolution | [
"Xiaotong Luo",
"Yuan Xie",
"Yanyun Qu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
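The bi-sampling idea (a uniform half plus an inverse half that over-samples hard regions) can be sketched with a simple difficulty proxy. The snippet below is a hedged NumPy illustration, not the paper's pipeline: gradient energy stands in for patch difficulty, and the "inverse" sampler simply draws with probability proportional to that score; the parameter-attribution stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((256, 256))              # stand-in HR image
P, n_patches = 32, 16                     # patch size, patches per batch half

def patch_difficulty(top, left):
    p = img[top:top + P, left:left + P]
    gy, gx = np.gradient(p)
    return np.sqrt(gx ** 2 + gy ** 2).mean()   # edge/texture proxy

coords = [(t, l) for t in range(0, 256 - P, P) for l in range(0, 256 - P, P)]
diff = np.array([patch_difficulty(t, l) for t, l in coords])

# Uniform half: preserves the intrinsic data distribution.
uni_idx = rng.choice(len(coords), size=n_patches)
# Inverse half: probability proportional to difficulty, emphasizing hard
# (edge/texture) regions that uniform sampling under-represents.
inv_idx = rng.choice(len(coords), size=n_patches, p=diff / diff.sum())

batch = [coords[i] for i in np.concatenate([uni_idx, inv_idx])]
print(len(batch), "patches; mean difficulty uniform vs inverse:",
      diff[uni_idx].mean().round(4), diff[inv_idx].mean().round(4))
```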
null | https://openreview.net/forum?id=f71xXsoG1v | @inproceedings{
domke2023provable,
title={Provable convergence guarantees for black-box variational inference},
author={Justin Domke and Robert M. Gower and Guillaume Garrigos},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=f71xXsoG1v}
} | Black-box variational inference is widely used in situations where there is no proof that its stochastic optimization succeeds. We suggest this is due to a theoretical gap in existing stochastic optimization proofs—namely the challenge of gradient estimators with unusual noise bounds, and a composite non-smooth objective. For dense Gaussian variational families, we observe that existing gradient estimators based on reparameterization satisfy a quadratic noise bound and give novel convergence guarantees for proximal and projected stochastic gradient descent using this bound. This provides rigorous guarantees that methods similar to those used in practice converge on realistic inference problems. | Provable convergence guarantees for black-box variational inference | [
"Justin Domke",
"Robert M. Gower",
"Guillaume Garrigos"
] | Conference | poster | 2306.03638 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=f6rQJ83ycb | @inproceedings{
luo2023reward,
title={Reward Finetuning for Faster and More Accurate Unsupervised Object Discovery},
author={Katie Z Luo and Zhenzhen Liu and Xiangyu Chen and Yurong You and Sagie Benaim and Cheng Perng Phoo and Mark Campbell and Wen Sun and Bharath Hariharan and Kilian Q Weinberger},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=f6rQJ83ycb}
} | Recent advances in machine learning have shown that Reinforcement Learning from Human Feedback (RLHF) can improve machine learning models and align them with human preferences. Although very successful for Large Language Models (LLMs), these advancements have not had a comparable impact in research for autonomous vehicles—where alignment with human expectations can be imperative. In this paper, we propose to adapt similar RL-based methods to unsupervised object discovery, i.e. learning to detect objects from LiDAR points without any training labels. Instead of labels, we use simple heuristics to mimic human feedback. More explicitly, we combine multiple heuristics into a simple reward function that positively correlates its score with bounding box accuracy, i.e., boxes containing objects are scored higher than those without. We start from the detector’s own predictions to explore the space and reinforce boxes with high rewards through gradient updates. Empirically, we demonstrate that our approach is not only more accurate, but also orders of magnitude faster to train compared to prior works on object discovery. Code is available at https://github.com/katieluo88/DRIFT. | Reward Finetuning for Faster and More Accurate Unsupervised Object Discovery | [
"Katie Z Luo",
"Zhenzhen Liu",
"Xiangyu Chen",
"Yurong You",
"Sagie Benaim",
"Cheng Perng Phoo",
"Mark Campbell",
"Wen Sun",
"Bharath Hariharan",
"Kilian Q Weinberger"
] | Conference | poster | 2310.19080 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=f6a9XVFYIo | @inproceedings{
xue2023sasolver,
title={{SA}-Solver: Stochastic Adams Solver for Fast Sampling of Diffusion Models},
author={Shuchen Xue and Mingyang Yi and Weijian Luo and Shifeng Zhang and Jiacheng Sun and Zhenguo Li and Zhi-Ming Ma},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=f6a9XVFYIo}
} | Diffusion Probabilistic Models (DPMs) have achieved considerable success in generation tasks. As sampling from DPMs is equivalent to solving diffusion SDE or ODE which is time-consuming, numerous fast sampling methods built upon improved differential equation solvers are proposed. The majority of such techniques consider solving the diffusion ODE due to its superior efficiency. However, stochastic sampling could offer additional advantages in generating diverse and high-quality data. In this work, we engage in a comprehensive analysis of stochastic sampling from two aspects: variance-controlled diffusion SDE and linear multi-step SDE solver. Based on our analysis, we propose SA-Solver, which is an improved efficient stochastic Adams method for solving diffusion SDE to generate data with high quality. Our experiments show that SA-Solver achieves: 1) improved or comparable performance compared with the existing state-of-the-art (SOTA) sampling methods for few-step sampling; 2) SOTA FID on substantial benchmark datasets under a suitable number of function evaluations (NFEs). | SA-Solver: Stochastic Adams Solver for Fast Sampling of Diffusion Models | [
"Shuchen Xue",
"Mingyang Yi",
"Weijian Luo",
"Shifeng Zhang",
"Jiacheng Sun",
"Zhenguo Li",
"Zhi-Ming Ma"
] | Conference | poster | 2309.05019 | [
"https://github.com/scxue/SA-Solver"
] | https://huggingface.co/papers/2309.05019 | 0 | 1 | 0 | 7 | 1 | [
"PixArt-alpha/PixArt-XL-2-1024-MS",
"PixArt-alpha/PixArt-Sigma-XL-2-1024-MS",
"PixArt-alpha/PixArt-XL-2-512x512",
"PixArt-alpha/PixArt-Sigma-XL-2-512-MS",
"PixArt-alpha/PixArt-XL-2-256x256",
"AlanB/SigmaJourney-1024ms"
] | [] | [
"PixArt-alpha/PixArt-alpha",
"PixArt-alpha/PixArt-Sigma",
"TIGER-Lab/GenAI-Arena",
"artificialguybr/Pixart-Sigma",
"dataautogpt3/PixArt-Sigma-900M",
"LanguageBind/Open-Sora-Plan-v1.0.0",
"alibaba-pai/EasyAnimate",
"jasperai/flash-diffusion",
"LanguageBind/Open-Sora-Plan-v1.1.0",
"Nymbo/image_gen_supaqueue",
"maxin-cn/Latte-1",
"fffiloni/flash-wallpapers",
"diffusers/compute-pipeline-size",
"priyanshu9588/PixArt-alpha",
"ptx0/PixArt-900M-EDiffi",
"sidd-genmo/Open-Sora-Plan-v1.0.0",
"jarnot/EasyAnimate",
"ijohn07/flash-diffusion",
"PeepDaSlan9/HYDRAS_Latte-1",
"Jyothirmai782/Pixart-Sigma",
"Viswanath999/Pixart-Sigma",
"Xtenda/PixArt-alpha-PixArt-Sigma-XL-2-1024-MS",
"jalve/jalvneis",
"dd890/PixArt-alpha-PixArt-XL-2-1024-MS",
"jalve/NeisAlv",
"MrOvkill/PixArt-alpha-moddedalltohell",
"YanzBotz/PixArt",
"abhijitgayen/PixArt-alpha-PixArt-XL-2-1024-MS",
"wandb/reproducible-pixart-alpha",
"vaikl/PixArt-alpha-PixArt-XL-2-1024-MS",
"Taf2023/Open-Sora-Plan-v1.0.0",
"lylosn/Open-Sora-Plan-v1.0.0",
"tsi-org/PixArt-alpha",
"lcyyyy/homework_end",
"tsi-org/PixioArt-alpha",
"yufiofficial/PixArt-alpha-PixArt-XL-2-1024-MS",
"CPM1234567890/ex01",
"Lucas94/PixArt-alpha-PixArt-XL-2-1024-MS",
"RO-Rtechs/Rtechs_Open-Sora-Plan-v1.1.0",
"cocktailpeanut/flash-diffusion",
"BobLLM/Sora",
"kletoskletos/PixArt-alpha-PixArt-XL-2-1024-MS",
"Nymbo/flash-wallpapers",
"indielikesai/PixArt-alpha-PixArt-XL-2-256x256"
] |
null | https://openreview.net/forum?id=f56xMRb7Vt | @inproceedings{
samuel2023normguided,
title={Norm-guided latent space exploration for text-to-image generation},
author={Dvir Samuel and Rami Ben-Ari and Nir Darshan and Haggai Maron and Gal Chechik},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=f56xMRb7Vt}
} | Text-to-image diffusion models show great potential in synthesizing a large variety of concepts in new compositions and scenarios. However, the latent space of initial seeds is still not well understood and its structure was shown to impact the generation of various concepts. Specifically, simple operations like interpolation and finding the centroid of a set of seeds perform poorly when using standard Euclidean or spherical metrics in the latent space. This paper makes the observation that, in current training procedures, diffusion models observe inputs with a narrow range of norm values. This has strong implications for methods that rely on seed manipulation for image generation, with applications to few-shot and long-tail learning tasks. To address this issue, we propose a novel method for interpolating between two seeds and demonstrate that it defines a new non-Euclidean metric that takes into account a norm-based prior on seeds. We describe a simple yet efficient algorithm for approximating this interpolation procedure and use it to further define centroids in the latent seed space. We show that our new interpolation and centroid techniques significantly enhance the generation of rare concept images. This further leads to state-of-the-art performance on few-shot and long-tail benchmarks, improving prior approaches in terms of generation speed, image quality, and semantic content. | Norm-guided latent space exploration for text-to-image generation | [
"Dvir Samuel",
"Rami Ben-Ari",
"Nir Darshan",
"Haggai Maron",
"Gal Chechik"
] | Conference | poster | 2306.08687 | [
"https://github.com/dvirsamuel/SeedSelect"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
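The seed-interpolation idea in the row above admits a compact sketch. The snippet below is a simplified stand-in for the paper's method, not its actual algorithm: it spherically interpolates the seed directions and pulls the interpolated norm toward sqrt(d), the value around which standard Gaussian seeds concentrate; the function name and the 50/50 norm blend are illustrative assumptions.

```python
import numpy as np

def norm_aware_interpolation(z0, z1, t):
    """Interpolate between two diffusion seeds while keeping the result in the
    high-density norm band of a standard Gaussian (||z|| close to sqrt(d))."""
    d = z0.size
    u0, u1 = z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(u0 @ u1, -1.0, 1.0))
    if omega < 1e-8:
        direction = u0  # seeds already point the same way
    else:
        direction = (np.sin((1 - t) * omega) * u0
                     + np.sin(t * omega) * u1) / np.sin(omega)
    # Blend the linearly interpolated norm with sqrt(d); plain linear
    # interpolation of the seeds themselves would undershoot this norm.
    norm = 0.5 * ((1 - t) * np.linalg.norm(z0) + t * np.linalg.norm(z1)) \
         + 0.5 * np.sqrt(d)
    return norm * direction

rng = np.random.default_rng(0)
z0, z1 = rng.standard_normal(4 * 64 * 64), rng.standard_normal(4 * 64 * 64)
mid = norm_aware_interpolation(z0, z1, 0.5)
print(np.linalg.norm(mid), np.sqrt(mid.size))  # both close to 128
```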
null | https://openreview.net/forum?id=f3JNQd7CHM | @inproceedings{
wies2023the,
title={The Learnability of In-Context Learning},
author={Noam Wies and Yoav Levine and Amnon Shashua},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=f3JNQd7CHM}
} | In-context learning is a surprising and important phenomenon that emerged when modern language models were scaled to billions of learned parameters.
Without modifying a large language model's weights, it can be tuned to perform various downstream natural language tasks simply by including concatenated training examples of these tasks in its input.
Though disruptive for many practical applications of large language models, this emergent learning paradigm is not well understood from a theoretical perspective. In this paper, we propose a first-of-its-kind PAC-based framework for in-context learnability, and use it to provide the first finite sample complexity results for the in-context learning setup.
Our framework includes an initial pretraining phase, which fits a function to the pretraining distribution, and then a second in-context learning phase, which keeps this function constant and concatenates training examples of the downstream task in its input.
We use our framework in order to prove that, under mild assumptions, when the pretraining distribution is a mixture of latent tasks (a model often considered for natural language pretraining), these tasks can be efficiently learned via in-context learning, even though the model's weights are unchanged and the input significantly diverges from the pretraining distribution.
Our theoretical analysis reveals that in this setting, in-context learning is more about identifying the task than about learning it, a result which is in line with a series of recent empirical findings.
We hope that the in-context learnability framework presented in this paper will facilitate future progress towards a deeper understanding of this important new learning paradigm. | The Learnability of In-Context Learning | [
"Noam Wies",
"Yoav Levine",
"Amnon Shashua"
] | Conference | poster | 2303.07895 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=f39Q3JyoIi | @inproceedings{
khani2023collaborative,
title={Collaborative Alignment of {NLP} Models},
author={Fereshte Khani and Marco Tulio Ribeiro},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=f39Q3JyoIi}
} | Despite substantial advancements, Natural Language Processing (NLP) models often require post-training adjustments to enforce business rules, rectify undesired behavior, and align with user values.
These adjustments involve operationalizing "concepts"—dictating desired model responses to certain inputs.
However, it's difficult for a single entity to enumerate and define all possible concepts, indicating a need for a multi-user, collaborative model alignment framework.
Moreover, the exhaustive delineation of a concept is challenging, and an improper approach can create shortcuts or interfere with original data or other concepts.
To address these challenges, we introduce CoAlign, a framework that enables multi-user interaction with the model, thereby mitigating individual limitations.
CoAlign aids users in operationalizing their concepts using Large Language Models, relying on the principle that NLP models exhibit simpler behaviors in local regions.
Our main insight is learning a \emph{local} model for each concept, and a \emph{global} model to integrate the original data with all concepts.
We then steer a large language model to generate instances within concept boundaries where local and global disagree.
Our experiments show CoAlign is effective at helping multiple users operationalize concepts and avoid interference for a variety of scenarios, tasks, and models. | Collaborative Alignment of NLP Models | [
"Fereshte Khani",
"Marco Tulio Ribeiro"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=f38EY21lBw | @inproceedings{
steinke2023privacy,
title={Privacy Auditing with One (1) Training Run},
author={Thomas Steinke and Milad Nasr and Matthew Jagielski},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=f38EY21lBw}
} | We propose a scheme for auditing differentially private machine learning systems with a single training run. This exploits the parallelism of being able to add or remove multiple training examples independently. We analyze this using the connection between differential privacy and statistical generalization, which avoids the cost of group privacy. Our auditing scheme requires minimal assumptions about the algorithm and can be applied in the black-box or white-box setting. We demonstrate the effectiveness of our framework by applying it to DP-SGD, where we can achieve meaningful empirical privacy lower bounds by training only one model. In contrast, standard methods would require training hundreds of models. | Privacy Auditing with One (1) Training Run | [
"Thomas Steinke",
"Milad Nasr",
"Matthew Jagielski"
] | Conference | oral | 2305.08846 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
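The one-run auditing recipe in the row above can be caricatured in a few lines: flip an independent coin per canary, let the attacker guess each coin, and convert the guessing accuracy into an epsilon lower bound. The sketch below treats the guesses as i.i.d., which is precisely the step the paper's analysis makes rigorous; the paper's actual bounds are tighter and handle delta > 0 and abstentions. `audit_epsilon_lower_bound` is a hypothetical helper, not the authors' code.

```python
import math
from scipy import stats

def audit_epsilon_lower_bound(num_correct, num_guesses, confidence=0.95):
    """Under eps-DP, each membership guess is correct with probability at most
    e^eps / (1 + e^eps), so a high-confidence lower bound on the per-guess
    accuracy inverts to a lower bound on eps."""
    # One-sided Clopper-Pearson lower bound on the true accuracy.
    p_lo = stats.beta.ppf(1 - confidence, num_correct,
                          num_guesses - num_correct + 1)
    if p_lo <= 0.5:
        return 0.0  # no statistically significant evidence of leakage
    return math.log(p_lo / (1 - p_lo))

# 800 of 1000 canary coins guessed correctly after a single training run.
print(audit_epsilon_lower_bound(num_correct=800, num_guesses=1000))
```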
null | https://openreview.net/forum?id=f2U4HCY8bg | @inproceedings{
ganai2023iterative,
title={Iterative Reachability Estimation for Safe Reinforcement Learning},
author={Milan Ganai and Zheng Gong and Chenning Yu and Sylvia Lee Herbert and Sicun Gao},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=f2U4HCY8bg}
} | Ensuring safety is important for the practical deployment of reinforcement learning (RL). Various challenges must be addressed, such as handling stochasticity in the environments, providing rigorous guarantees of persistent state-wise safety satisfaction, and avoiding overly conservative behaviors that sacrifice performance. We propose a new framework, Reachability Estimation for Safe Policy Optimization (RESPO), for safety-constrained RL in general stochastic settings. In the feasible set where there exist violation-free policies, we optimize for rewards while maintaining persistent safety. Outside this feasible set, our optimization produces the safest behavior by guaranteeing entrance into the feasible set whenever possible with the least cumulative discounted violations. We introduce a class of algorithms using our novel reachability estimation function to optimize in our proposed framework and in similar frameworks such as those concurrently handling multiple hard and soft constraints. We theoretically establish that our algorithms almost surely converge to locally optimal policies of our safe optimization framework. We evaluate the proposed methods on a diverse suite of safe RL environments from Safety Gym, PyBullet, and MuJoCo, and show the benefits in improving both reward performance and safety compared with state-of-the-art baselines. | Iterative Reachability Estimation for Safe Reinforcement Learning | [
"Milan Ganai",
"Zheng Gong",
"Chenning Yu",
"Sylvia Lee Herbert",
"Sicun Gao"
] | Conference | poster | 2309.13528 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=f0Jj3C3Pnp | @inproceedings{
du2023hubrouter,
title={HubRouter: Learning Global Routing via Hub Generation and Pin-hub Connection},
author={Xingbo Du and Chonghua Wang and Ruizhe Zhong and Junchi Yan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=f0Jj3C3Pnp}
} | Global Routing (GR) is a core yet time-consuming task in VLSI systems. It recently attracted efforts from the machine learning community, especially generative models, but they suffer from the non-connectivity of generated routes. We argue that the inherent non-connectivity can harm the advantage of its one-shot generation and has to be post-processed by traditional approaches. Thus, we propose a novel definition, called hub, which represents the key point in the route. Equipped with hubs, global routing is transferred from a pin-pin connection problem to a hub-pin connection problem. Specifically, to generate definitely-connected routes, this paper proposes a two-phase learning scheme named HubRouter, which includes 1) hub-generation phase: A condition-guided hub generator using deep generative models; 2) pin-hub-connection phase: An RSMT construction module that connects the hubs and pins using an actor-critic model. In the first phase, we incorporate typical generative models into a multi-task learning framework to perform hub generation and address the impact of sensitive noise points with stripe mask learning. During the second phase, HubRouter employs an actor-critic model to finish the routing, which is efficient and has very slight errors. Experiments on simulated and real-world global routing benchmarks are performed to show our approach's efficiency, particularly HubRouter outperforms the state-of-the-art generative global routing methods in wirelength, overflow, and running time. Moreover, HubRouter also shows strength in other applications, such as RSMT construction and interactive path replanning. | HubRouter: Learning Global Routing via Hub Generation and Pin-hub Connection | [
"Xingbo Du",
"Chonghua Wang",
"Ruizhe Zhong",
"Junchi Yan"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=ezqI5WgGvY | @inproceedings{
fuller2023croma,
title={{CROMA}: Remote Sensing Representations with Contrastive Radar-Optical Masked Autoencoders},
author={Anthony Fuller and Koreen Millard and James R Green},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ezqI5WgGvY}
} | A vital and rapidly growing application, remote sensing offers vast yet sparsely labeled, spatially aligned multimodal data; this makes self-supervised learning algorithms invaluable. We present CROMA: a framework that combines contrastive and reconstruction self-supervised objectives to learn rich unimodal and multimodal representations. Our method separately encodes masked-out multispectral optical and synthetic aperture radar samples—aligned in space and time—and performs cross-modal contrastive learning. Another encoder fuses these sensors, producing joint multimodal encodings that are used to predict the masked patches via a lightweight decoder. We show that these objectives are complementary when leveraged on spatially aligned multimodal data. We also introduce X- and 2D-ALiBi, which spatially biases our cross- and self-attention matrices. These strategies improve representations and allow our models to effectively extrapolate to images up to $17.6\times$ larger at test-time. CROMA outperforms the current SoTA multispectral model, evaluated on: four classification benchmarks—finetuning (avg.$\uparrow$ 1.8%), linear (avg.$\uparrow$ 2.4%) and nonlinear (avg.$\uparrow$ 1.4%) probing, $k$NN classification (avg.$\uparrow$ 3.5%), and $K$-means clustering (avg.$\uparrow$ 8.4%); and three segmentation benchmarks (avg.$\uparrow$ 6.4%). CROMA’s rich, optionally multimodal representations can be widely leveraged across remote sensing applications. | CROMA: Remote Sensing Representations with Contrastive Radar-Optical Masked Autoencoders | [
"Anthony Fuller",
"Koreen Millard",
"James R Green"
] | Conference | poster | 2311.00566 | [
"https://github.com/antofuller/croma"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
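Of the pieces named in the CROMA row above, the 2D-ALiBi bias is the easiest to sketch: attention logits between image patches are penalized in proportion to their Euclidean distance on the patch grid, which is also what lets the model extrapolate to larger images at test time. Per-head slopes and the cross-attention X-ALiBi variant are left out; the slope value here is an arbitrary assumption.

```python
import numpy as np

def alibi_2d_bias(grid, slope):
    """Additive attention bias: 0 on the diagonal, increasingly negative the
    farther apart two patches sit on the (grid x grid) image layout."""
    ys, xs = np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij")
    pos = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)  # (N, 2)
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    return -slope * dist  # add to attention logits before the softmax

bias = alibi_2d_bias(grid=4, slope=0.5)
print(bias.shape)  # (16, 16); works for any grid size at test time
```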
null | https://openreview.net/forum?id=ezCsMOy1w9 | @inproceedings{
zheng2023texttttaco,
title={$\texttt{TACO}$: Temporal Latent Action-Driven Contrastive Loss for Visual Reinforcement Learning},
author={Ruijie Zheng and Xiyao Wang and Yanchao Sun and Shuang Ma and Jieyu Zhao and Huazhe Xu and Hal Daum{\'e} III and Furong Huang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ezCsMOy1w9}
} | Despite recent progress in reinforcement learning (RL) from raw pixel data, sample inefficiency continues to present a substantial obstacle.
Prior works have attempted to address this challenge by creating self-supervised auxiliary tasks, aiming to enrich the agent's learned representations with control-relevant information for future state prediction.
However, these objectives are often insufficient to learn representations that can represent the optimal policy or value function, and they often consider tasks with small, abstract discrete action spaces and thus overlook the importance of action representation learning in continuous control.
In this paper, we introduce $\texttt{TACO}$: $\textbf{T}$emporal $\textbf{A}$ction-driven $\textbf{CO}$ntrastive Learning, a simple yet powerful temporal contrastive learning approach that facilitates the concurrent acquisition of latent state and action representations for agents.
$\texttt{TACO}$ simultaneously learns a state and an action representation by optimizing the mutual information between representations of current states paired with action sequences and representations of the corresponding future states.
Theoretically, $\texttt{TACO}$ can be shown to learn state and action representations that encompass sufficient information for control, thereby improving sample efficiency.
For online RL, $\texttt{TACO}$ achieves 40% performance boost after one million environment interaction steps on average across nine challenging visual continuous control tasks from Deepmind Control Suite.
In addition, we show that $\texttt{TACO}$ can also serve as a plug-and-play module added to existing offline visual RL methods to establish new state-of-the-art performance for offline visual RL across offline datasets with varying quality. | TACO: Temporal Latent Action-Driven Contrastive Loss for Visual Reinforcement Learning | [
"Ruijie Zheng",
"Xiyao Wang",
"Yanchao Sun",
"Shuang Ma",
"Jieyu Zhao",
"Huazhe Xu",
"Hal Daumé III",
"Furong Huang"
] | Conference | poster | [
"https://github.com/frankzheng2022/taco"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
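The objective described in the TACO row above is, at its core, an InfoNCE loss whose positive pairs match a (state, action-sequence) representation with the representation of the resulting future state. The sketch below is a minimal reading of that idea; the actual method's encoder architectures, target networks, and auxiliary losses differ, and all sizes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalActionContrast(nn.Module):
    """Contrast (state, k-step action sequence) against the future state."""

    def __init__(self, obs_dim, act_dim, k, latent=128):
        super().__init__()
        self.state_enc = nn.Sequential(
            nn.Linear(obs_dim, latent), nn.ReLU(), nn.Linear(latent, latent))
        self.action_enc = nn.Sequential(
            nn.Linear(act_dim * k, latent), nn.ReLU(), nn.Linear(latent, latent))
        self.proj = nn.Linear(2 * latent, latent)

    def forward(self, obs, action_seq, future_obs, temperature=0.1):
        z = self.state_enc(obs)                         # (B, latent)
        u = self.action_enc(action_seq.flatten(1))      # (B, latent)
        query = F.normalize(self.proj(torch.cat([z, u], -1)), dim=-1)
        key = F.normalize(self.state_enc(future_obs), dim=-1)
        logits = query @ key.t() / temperature          # (B, B) similarities
        labels = torch.arange(obs.size(0))              # positives: diagonal
        return F.cross_entropy(logits, labels)

loss_fn = TemporalActionContrast(obs_dim=32, act_dim=6, k=3)
obs, fut = torch.randn(16, 32), torch.randn(16, 32)
acts = torch.randn(16, 3, 6)                            # 3-step action sequence
print(loss_fn(obs, acts, fut).item())
```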
null | https://openreview.net/forum?id=ez6Cb0ZGzG | @inproceedings{
suhr2023continual,
title={Continual Learning for Instruction Following from Realtime Feedback},
author={Alane Suhr and Yoav Artzi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ez6Cb0ZGzG}
} | We propose and deploy an approach to continually train an instruction-following agent from feedback provided by users during collaborative interactions. During interaction, human users instruct an agent using natural language, and provide realtime binary feedback as they observe the agent following their instructions. We design a contextual bandit learning approach, converting user feedback to immediate reward. We evaluate through thousands of human-agent interactions, demonstrating 15.4% absolute improvement in instruction execution accuracy over time. We also show our approach is robust to several design variations, and that the feedback signal is roughly equivalent to the learning signal of supervised demonstration data. | Continual Learning for Instruction Following from Realtime Feedback | [
"Alane Suhr",
"Yoav Artzi"
] | Conference | spotlight | 2212.09710 | [
"https://github.com/lil-lab/clif_cb"
] | https://huggingface.co/papers/2212.09710 | 1 | 0 | 1 | 2 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=exiXmAfuDK | @inproceedings{
mazzetto2023an,
title={An Adaptive Algorithm for Learning with Unknown Distribution Drift},
author={Alessio Mazzetto and Eli Upfal},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=exiXmAfuDK}
} | We develop and analyze a general technique for learning with an unknown distribution drift. Given a sequence of independent observations from the last $T$ steps of a drifting distribution, our algorithm agnostically learns a family of functions with respect to the current distribution at time $T$. Unlike previous work, our technique does not require prior knowledge about the magnitude of the drift. Instead, the algorithm adapts to the sample data. Without explicitly estimating the drift, the algorithm learns a family of functions with almost the same error as a learning algorithm that knows the magnitude of the drift in advance. Furthermore, since our algorithm adapts to the data, it can guarantee a better learning error than an algorithm that relies on loose bounds on the drift. We demonstrate the application of our technique in two fundamental learning scenarios: binary classification and linear regression. | An Adaptive Algorithm for Learning with Unknown Distribution Drift | [
"Alessio Mazzetto",
"Eli Upfal"
] | Conference | poster | 2305.02252 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=exg62lfHrB | @inproceedings{
zhang2023model,
title={Model Spider: Learning to Rank Pre-Trained Models Efficiently},
author={Yi-Kai Zhang and Ting-Ji Huang and Yao-Xiang Ding and De-Chuan Zhan and Han-Jia Ye},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=exg62lfHrB}
} | Figuring out which Pre-Trained Model (PTM) from a model zoo fits the target task is essential to take advantage of plentiful model resources. With the availability of numerous heterogeneous PTMs from diverse fields, efficiently selecting the most suitable one is challenging due to the time-consuming costs of carrying out forward or backward passes over all PTMs. In this paper, we propose Model Spider, which tokenizes both PTMs and tasks by summarizing their characteristics into vectors to enable efficient PTM selection. By leveraging the approximated performance of PTMs on a separate set of training tasks, Model Spider learns to construct representation and measure the fitness score between a model-task pair via their representation. The ability to rank relevant PTMs higher than others generalizes to new tasks. With the top-ranked PTM candidates, we further learn to enrich task repr. with their PTM-specific semantics to re-rank the PTMs for better selection. Model Spider balances efficiency and selection ability, making PTM selection like a spider preying on a web. Model Spider exhibits promising performance across diverse model zoos, including visual models and Large Language Models (LLMs). Code is available at https://github.com/zhangyikaii/Model-Spider. | Model Spider: Learning to Rank Pre-Trained Models Efficiently | [
"Yi-Kai Zhang",
"Ting-Ji Huang",
"Yao-Xiang Ding",
"De-Chuan Zhan",
"Han-Jia Ye"
] | Conference | spotlight | 2306.03900 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
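Stripped of its learned components, the selection idea in the Model Spider row above reduces to scoring model "tokens" against a task "token" and ranking. The toy below substitutes cosine similarity for the learned fitness score and random vectors for the learned tokens, purely for illustration; see the note after the code.

```python
import numpy as np

def rank_models(model_tokens, task_token, top_k=3):
    """Rank pre-trained models by the similarity of their summary vectors to
    the task's summary vector (a stand-in for the learned fitness score)."""
    m = model_tokens / np.linalg.norm(model_tokens, axis=1, keepdims=True)
    t = task_token / np.linalg.norm(task_token)
    scores = m @ t
    order = np.argsort(-scores)
    return order[:top_k], scores[order[:top_k]]

rng = np.random.default_rng(1)
zoo = rng.standard_normal((10, 64))   # tokens for 10 pre-trained models
task = rng.standard_normal(64)        # token for the target task
print(rank_models(zoo, task))
```

The payoff of this design is that selection needs no forward or backward passes over the candidate models; only the top-ranked candidates are then examined more closely for re-ranking.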
null | https://openreview.net/forum?id=exPzwOhBgx | @inproceedings{
liu2023learning,
title={Learning Dictionary for Visual Attention},
author={Yingjie Liu and Xuan Liu and Hui Yu and XUAN TANG and Xian Wei},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=exPzwOhBgx}
} | Recently, the attention mechanism has shown outstanding competence in capturing global structure information and long-range relationships within data, thus enhancing the performance of deep vision models on various computer vision tasks. In this work, we propose a novel dictionary learning-based attention (\textit{Dic-Attn}) module, which models this issue as a decomposition and reconstruction problem with the sparsity prior, inspired by sparse coding in the human visual perception system. The proposed \textit{Dic-Attn} module decomposes the input into a dictionary and corresponding sparse representations, allowing for the disentanglement of underlying nonlinear structural information in visual data and the reconstruction of an attention embedding. By applying transformation operations in the spatial and channel domains, the module dynamically selects the dictionary's atoms and sparse representations. Finally, the updated dictionary and sparse representations capture the global contextual information and reconstruct the attention maps. The proposed \textit{Dic-Attn} module is designed with plug-and-play compatibility, allowing for integration into deep attention encoders. Our approach offers an intuitive and elegant means to exploit the discriminative information from data, promoting visual attention construction. Extensive experimental results on various computer vision tasks, e.g., image and point cloud classification, validate that our method achieves promising performance, and shows a strong competitive comparison with state-of-the-art attention methods. | Learning Dictionary for Visual Attention | [
"Yingjie Liu",
"Xuan Liu",
"Hui Yu",
"XUAN TANG",
"Xian Wei"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=exGOXqxR0L | @inproceedings{
roberts2023geometryaware,
title={Geometry-Aware Adaptation for Pretrained Models},
author={Nicholas Roberts and Xintong Li and Dyah Adila and Sonia Cromp and Tzu-Heng Huang and Jitian Zhao and Frederic Sala},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=exGOXqxR0L}
} | Machine learning models---including prominent zero-shot models---are often trained on datasets whose labels are only a small proportion of a larger label space. Such spaces are commonly equipped with a metric that relates the labels via distances between them. We propose a simple approach to exploit this information to adapt the trained model to reliably predict new classes---or, in the case of zero-shot prediction, to improve its performance---without any additional training. Our technique is a drop-in replacement of the standard prediction rule, swapping $\text{argmax}$ with the Fréchet mean. We provide a comprehensive theoretical analysis for this approach, studying (i) learning-theoretic results trading off label space diameter, sample complexity, and model dimension, (ii) characterizations of the full range of scenarios in which it is possible to predict any unobserved class, and (iii) an optimal active learning-like next class selection procedure to obtain optimal training classes for when it is not possible to predict the entire range of unobserved classes. Empirically, using easily-available external metrics, our proposed approach, Loki, gains up to 29.7% relative improvement over SimCLR on ImageNet and scales to hundreds of thousands of classes. When no such metric is available, Loki can use self-derived metrics from class embeddings and obtains a 10.5% improvement on pretrained zero-shot models such as CLIP. | Geometry-Aware Adaptation for Pretrained Models | [
"Nicholas Roberts",
"Xintong Li",
"Dyah Adila",
"Sonia Cromp",
"Tzu-Heng Huang",
"Jitian Zhao",
"Frederic Sala"
] | Conference | poster | 2307.12226 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
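The drop-in rule described in the Loki row above is concrete enough to state in a few lines: given class probabilities and a metric on labels, predict the Fréchet mean instead of the argmax. A minimal sketch, assuming squared distances in the Fréchet objective (one common convention) and a precomputed label-distance matrix:

```python
import numpy as np

def loki_style_predict(probs, dist):
    """Replace argmax with the class minimizing the probability-weighted
    squared distance to all classes. dist: (C, C) label-metric matrix."""
    costs = dist ** 2 @ probs      # costs[j] = sum_i probs[i] * dist[j, i]^2
    return int(np.argmin(costs))

# Toy metric: three classes on a line (0 -- 1 -- 2).
dist = np.abs(np.subtract.outer(np.arange(3), np.arange(3))).astype(float)
probs = np.array([0.45, 0.10, 0.45])
print(np.argmax(probs), loki_style_predict(probs, dist))  # 0 vs. 1
```

The example also shows why the rule can help: with probability mass split between the two extreme classes, the metric centroid (class 1) minimizes the expected squared distance, whereas argmax commits to one mode.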
null | https://openreview.net/forum?id=etd0ebzGOG | @inproceedings{
qi2023vpp,
title={{VPP}: Efficient Conditional 3D Generation via Voxel-Point Progressive Representation},
author={Zekun Qi and Muzhou Yu and Runpei Dong and Kaisheng Ma},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=etd0ebzGOG}
} | Conditional 3D generation is undergoing a significant advancement, enabling the free creation of 3D content from inputs such as text or 2D images. However, previous approaches have suffered from low inference efficiency, limited generation categories, and restricted downstream applications. In this work, we revisit the impact of different 3D representations on generation quality and efficiency. We propose a progressive generation method through Voxel-Point Progressive Representation (VPP). VPP leverages structured voxel representation in the proposed Voxel Semantic Generator and the sparsity of unstructured point representation in the Point Upsampler, enabling efficient generation of multi-category objects. VPP can generate high-quality 8K point clouds within 0.2 seconds. Additionally, the masked generation Transformer allows for various 3D downstream tasks, such as generation, editing, completion, and pre-training. Extensive experiments demonstrate that VPP efficiently generates high-fidelity and diverse 3D shapes across different categories, while also exhibiting excellent representation transfer performance. Codes will be released at https://github.com/qizekun/VPP. | VPP: Efficient Conditional 3D Generation via Voxel-Point Progressive Representation | [
"Zekun Qi",
"Muzhou Yu",
"Runpei Dong",
"Kaisheng Ma"
] | Conference | poster | 2307.16605 | [
"https://github.com/qizekun/vpp"
] | https://huggingface.co/papers/2307.16605 | 2 | 0 | 0 | 4 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=etYk6TeO2q | @inproceedings{
liu2023causal,
title={Causal Discovery from Subsampled Time Series with Proxy Variables},
author={Mingzhou Liu and Xinwei Sun and Lingjing Hu and Yizhou Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=etYk6TeO2q}
} | Inferring causal structures from time series data is the central interest of many scientific inquiries. A major barrier to such inference is the problem of subsampling, *i.e.*, the frequency of measurement is much lower than that of causal influence. To overcome this problem, numerous methods have been proposed, yet these were either limited to the linear case or failed to achieve identifiability. In this paper, we propose a constraint-based algorithm that can identify the entire causal structure from subsampled time series, without any parametric constraint. Our observation is that the challenge of subsampling arises mainly from hidden variables at the unobserved time steps. Meanwhile, every hidden variable has an observed proxy, which is essentially itself at some observable time in the future, benefiting from the temporal structure. Based on these observations, we can leverage the proxies to remove the bias induced by the hidden variables and hence achieve identifiability. Following this intuition, we propose a proxy-based causal discovery algorithm. Our algorithm is nonparametric and can achieve full causal identification. Theoretical advantages are reflected in synthetic and real-world experiments. | Causal Discovery from Subsampled Time Series with Proxy Variables | [
"Mingzhou Liu",
"Xinwei Sun",
"Lingjing Hu",
"Yizhou Wang"
] | Conference | poster | 2305.05276 | [
"https://github.com/lmz123321/proxy_causal_discovery"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=esy7pkZmKn | @inproceedings{
zhu2023doublyrobust,
title={Doubly-Robust Self-Training},
author={Banghua Zhu and Mingyu Ding and Philip Jacobson and Ming Wu and Wei Zhan and Michael Jordan and Jiantao Jiao},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=esy7pkZmKn}
} | Self-training is a well-established technique in semi-supervised learning, which leverages unlabeled data by generating pseudo-labels and incorporating them with a limited labeled dataset for training. The effectiveness of self-training heavily relies on the accuracy of these pseudo-labels. In this paper, we introduce doubly-robust self-training, an innovative semi-supervised algorithm that provably balances between two extremes. When pseudo-labels are entirely incorrect, our method reduces to a training process solely using labeled data. Conversely, when pseudo-labels are completely accurate, our method transforms into a training process utilizing all pseudo-labeled data and labeled data, thus increasing the effective sample size. Through empirical evaluations on both the ImageNet dataset for image classification and the nuScenes autonomous driving dataset for 3D object detection, we demonstrate the superiority of the doubly-robust loss over the self-training baseline. | Doubly-Robust Self-Training | [
"Banghua Zhu",
"Mingyu Ding",
"Philip Jacobson",
"Ming Wu",
"Wei Zhan",
"Michael Jordan",
"Jiantao Jiao"
] | Conference | poster | [
"https://github.com/dingmyu/doubly-robust-self-training"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
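The balancing property claimed in the doubly-robust row above has a standard construction behind it, sketched below under one plausible reading (not necessarily the paper's exact estimator): train on all pseudo-labels, subtract the pseudo-label loss on the labeled subset, and add back the true-label loss there. If the pseudo-labels equal the true labels, the two labeled terms cancel and all data is used; if they are useless, the first two terms cancel in expectation, leaving supervised training only.

```python
import torch

def doubly_robust_loss(loss_fn, model, x_unlab, pseudo_unlab,
                       x_lab, pseudo_lab, y_lab):
    """L = mean_all l(f(x), y_pseudo)
         - mean_labeled l(f(x), y_pseudo)
         + mean_labeled l(f(x), y_true)"""
    all_x = torch.cat([x_unlab, x_lab])
    all_pseudo = torch.cat([pseudo_unlab, pseudo_lab])
    term_all = loss_fn(model(all_x), all_pseudo)     # trust the pseudo-labels
    term_corr = loss_fn(model(x_lab), pseudo_lab)    # ...but correct them
    term_sup = loss_fn(model(x_lab), y_lab)          # ...with the true labels
    return term_all - term_corr + term_sup

model = torch.nn.Linear(8, 3)
ce = torch.nn.CrossEntropyLoss()
xu, xl = torch.randn(64, 8), torch.randn(16, 8)
pu, pl = torch.randint(0, 3, (64,)), torch.randint(0, 3, (16,))
yl = torch.randint(0, 3, (16,))
print(doubly_robust_loss(ce, model, xu, pu, xl, pl, yl).item())
```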
null | https://openreview.net/forum?id=eqyhjLG5Nr | @inproceedings{
jagadeesan2023supplyside,
title={Supply-Side Equilibria in Recommender Systems},
author={Meena Jagadeesan and Nikhil Garg and Jacob Steinhardt},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=eqyhjLG5Nr}
} | Algorithmic recommender systems such as Spotify and Netflix affect not only consumer behavior but also *producer incentives*. Producers seek to create content that will be shown by the recommendation algorithm, which can impact both the diversity and quality of their content. In this work, we investigate the resulting supply-side equilibria in personalized content recommender systems. We model the decisions of producers as choosing *multi-dimensional* content vectors and users as having *heterogeneous* preferences, which contrasts with classical low-dimensional models. Multi-dimensionality and heterogeneity create the potential for *specialization*, where different producers create different types of content at equilibrium. Using a duality argument, we derive necessary and sufficient conditions for whether specialization occurs. Then, we characterize the distribution of content at equilibrium in concrete settings with two populations of users. Lastly, we show that specialization can enable producers to achieve *positive profit at equilibrium*, which means that specialization can reduce the competitiveness of the marketplace. At a conceptual level, our analysis of supply-side competition takes a step towards elucidating how personalized recommendations shape the marketplace of digital goods. | Supply-Side Equilibria in Recommender Systems | [
"Meena Jagadeesan",
"Nikhil Garg",
"Jacob Steinhardt"
] | Conference | poster | 2206.13489 | [
"https://github.com/mjagadeesan/supply-side-equilibria"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=eozEoAtjG8 | @inproceedings{
chen2023understanding,
title={Understanding and Improving Feature Learning for Out-of-Distribution Generalization},
author={Yongqiang Chen and Wei Huang and Kaiwen Zhou and Yatao Bian and Bo Han and James Cheng},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=eozEoAtjG8}
} | A common explanation for the failure of out-of-distribution (OOD) generalization is that the model trained with empirical risk minimization (ERM) learns spurious features instead of invariant features. However, several recent studies challenged this explanation and found that deep networks may have already learned sufficiently good features for OOD generalization. Despite the contradictions at first glance, we theoretically show that ERM essentially learns both spurious and invariant features, while ERM tends to learn spurious features faster if the spurious correlation is stronger. Moreover, when the ERM-learned features are fed to the OOD objectives, the invariant feature learning quality significantly affects the final OOD performance, as OOD objectives rarely learn new features. Therefore, ERM feature learning can be a bottleneck to OOD generalization. To alleviate this reliance, we propose Feature Augmented Training (FeAT) to enforce the model to learn richer features ready for OOD generalization. FeAT iteratively augments the model to learn new features while retaining the already learned features. In each round, the retention and augmentation operations are performed on different subsets of the training data that capture distinct features. Extensive experiments show that FeAT effectively learns richer features, thus boosting the performance of various OOD objectives. | Understanding and Improving Feature Learning for Out-of-Distribution Generalization | [
"Yongqiang Chen",
"Wei Huang",
"Kaiwen Zhou",
"Yatao Bian",
"Bo Han",
"James Cheng"
] | Conference | poster | 2304.11327 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=eoDNaH3pfB | @inproceedings{
kaplan2023blackbox,
title={Black-Box Differential Privacy for Interactive {ML}},
author={Haim Kaplan and Yishay Mansour and Shay Moran and Kobbi Nissim and Uri Stemmer},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=eoDNaH3pfB}
} | In this work we revisit an interactive variant of joint differential privacy, recently introduced by Naor et al. [2023], and generalize it towards handling online processes in which existing privacy definitions seem too restrictive. We study basic properties of this definition and demonstrate that it satisfies (suitable variants of) group privacy, composition, and post-processing.
In order to demonstrate the advantages of this privacy definition compared to traditional forms of differential privacy,
we consider the basic setting of online classification. We show that any (possibly non-private) learning rule can be effectively transformed to a private learning rule with only a polynomial overhead in the mistake bound. This demonstrates a stark difference with traditional forms of differential privacy, such as the one studied by Golowich and Livni [2021], where only a double exponential overhead in the mistake bound is known (via an information theoretic upper bound). | Black-Box Differential Privacy for Interactive ML | [
"Haim Kaplan",
"Yishay Mansour",
"Shay Moran",
"Kobbi Nissim",
"Uri Stemmer"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=enfx8HM4Rp | @inproceedings{
yin2023train,
title={Train Once and Explain Everywhere: Pre-training Interpretable Graph Neural Networks},
author={Jun Yin and Chaozhuo Li and Hao Yan and Jianxun Lian and Senzhang Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=enfx8HM4Rp}
} | Intrinsic interpretable graph neural networks aim to provide transparent predictions by identifying the influential fraction of the input graph that guides the model prediction, i.e., the explanatory subgraph. However, current interpretable GNNs are mostly dataset-specific and generalize poorly to different graphs. A more generalizable GNN interpretation model which can effectively distill the universal structural patterns of different graphs has so far remained unexplored. Motivated by the great success of recent pre-training techniques, we for the first time propose the Pre-training Interpretable Graph Neural Network ($\pi$-GNN) to distill the universal interpretability of GNNs by pre-training over synthetic graphs with ground-truth explanations. Specifically, we introduce a structural pattern learning module to extract diverse universal structure patterns and integrate them together to comprehensively represent the graphs of different types. Next, a hypergraph refining module is proposed to identify the explanatory subgraph by incorporating the universal structure patterns with local edge interactions. Finally, the task-specific predictor is cascaded with the pre-trained $\pi$-GNN model and fine-tuned over downstream tasks. Extensive experiments demonstrate that $\pi$-GNN significantly surpasses the leading interpretable GNN baselines with up to 9.98\% interpretation improvement and 16.06\% classification accuracy improvement. Meanwhile, $\pi$-GNN pre-trained on the graph classification task also achieves top-tier interpretation performance on the node classification task, which further verifies its promising generalization performance among different downstream tasks. Our code and datasets are available at https://anonymous.4open.science/r/PI-GNN-F86C | Train Once and Explain Everywhere: Pre-training Interpretable Graph Neural Networks | [
"Jun Yin",
"Chaozhuo Li",
"Hao Yan",
"Jianxun Lian",
"Senzhang Wang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=en4LGxpd9E | @inproceedings{
alabdulmohsin2023getting,
title={Getting ViT in Shape: Scaling Laws for Compute-Optimal Model Design},
author={Ibrahim Alabdulmohsin and Xiaohua Zhai and Alexander Kolesnikov and Lucas Beyer},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=en4LGxpd9E}
} | Scaling laws have been recently employed to derive compute-optimal model size (number of parameters) for a given compute duration. We advance and refine such methods to infer compute-optimal model shapes, such as width and depth, and successfully implement this in vision transformers. Our shape-optimized vision transformer, SoViT, achieves results competitive with models that exceed twice its size, despite being pre-trained with an equivalent amount of compute. For example, SoViT-400m/14 achieves 90.3% fine-tuning accuracy on ILSRCV2012, surpassing the much larger ViT-g/14 and approaching ViT-G/14 under identical settings, with also less than half the inference cost. We conduct a thorough evaluation across multiple tasks, such as image classification, captioning, VQA and zero-shot transfer, demonstrating the effectiveness of our model across a broad range of domains and identifying limitations. Overall, our findings challenge the prevailing approach of blindly scaling up vision models and pave a path for a more informed scaling. | Getting ViT in Shape: Scaling Laws for Compute-Optimal Model Design | [
"Ibrahim Alabdulmohsin",
"Xiaohua Zhai",
"Alexander Kolesnikov",
"Lucas Beyer"
] | Conference | poster | 2305.13035 | [
"https://github.com/google-research/big_vision"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=elPtHcfjpH | @inproceedings{
mirza2023lafter,
title={La{FT}er: Label-Free Tuning of Zero-shot Classifier using Language and Unlabeled Image Collections},
author={Muhammad Jehanzeb Mirza and Leonid Karlinsky and Wei Lin and Horst Possegger and Mateusz Kozinski and Rogerio Feris and Horst Bischof},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=elPtHcfjpH}
} | Recently, large-scale pre-trained Vision and Language (VL) models have set a new state-of-the-art (SOTA) in zero-shot visual classification enabling open-vocabulary recognition of potentially unlimited set of categories defined as simple language prompts. However, despite these great advances, the performance of these zero-shot classifiers still falls short of the results of dedicated (closed category set) classifiers trained with supervised fine-tuning. In this paper we show, for the first time, how to reduce this gap without any labels and without any paired VL data, using an unlabeled image collection and a set of texts auto-generated using a Large Language Model (LLM) describing the categories of interest and effectively substituting labeled visual instances of those categories. Using our label-free approach, we are able to attain significant performance improvements over the zero-shot performance of the base VL model and other contemporary methods and baselines on a wide variety of datasets, demonstrating absolute improvement of up to $11.7\%$ ($3.8\%$ on average) in the label-free setting. Moreover, despite our approach being label-free, we observe $1.3\%$ average gains over leading few-shot prompting baselines that do use 5-shot supervision. | LaFTer: Label-Free Tuning of Zero-shot Classifier using Language and Unlabeled Image Collections | [
"Muhammad Jehanzeb Mirza",
"Leonid Karlinsky",
"Wei Lin",
"Horst Possegger",
"Mateusz Kozinski",
"Rogerio Feris",
"Horst Bischof"
] | Conference | poster | 2305.18287 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=ekMLUoC2sq | @inproceedings{
agmon2023simultaneous,
title={Simultaneous embedding of multiple attractor manifolds in a recurrent neural network using constrained gradient optimization},
author={Haggai Agmon and Yoram Burak},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ekMLUoC2sq}
} | The storage of continuous variables in working memory is hypothesized to be sustained in the brain by the dynamics of recurrent neural networks (RNNs) whose steady states form continuous manifolds. In some cases, it is thought that the synaptic connectivity supports multiple attractor manifolds, each mapped to a different context or task. For example, in hippocampal area CA3, positions in distinct environments are represented by distinct sets of population activity patterns, each forming a continuum. It has been argued that the embedding of multiple continuous attractors in a single RNN inevitably causes detrimental interference: quenched noise in the synaptic connectivity disrupts the continuity of each attractor, replacing it by a discrete set of steady states that can be conceptualized as lying on local minima of an abstract energy landscape. Consequently, population activity patterns exhibit systematic drifts towards one of these discrete minima, thereby degrading the stored memory over time. Here we show that it is possible to dramatically attenuate these detrimental interference effects by adjusting the synaptic weights. Synaptic weight adjustments are derived from a loss function that quantifies the roughness of the energy landscape along each of the embedded attractor manifolds. By minimizing this loss function, the stability of states can be dramatically improved, without compromising the capacity. | Simultaneous embedding of multiple attractor manifolds in a recurrent neural network using constrained gradient optimization | [
"Haggai Agmon",
"Yoram Burak"
] | Conference | poster | 2310.18708 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=eibTaY6qGI | @inproceedings{
letzelter2023resilient,
title={Resilient Multiple Choice Learning: A learned scoring scheme with application to audio scene analysis},
author={Victor Letzelter and Mathieu Fontaine and Mickael Chen and Patrick Perez and Slim Essid and Ga{\"e}l Richard},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=eibTaY6qGI}
} | We introduce Resilient Multiple Choice Learning (rMCL), an extension of the MCL approach for conditional distribution estimation in regression settings where multiple targets may be sampled for each training input.
Multiple Choice Learning is a simple framework to tackle multimodal density estimation, using the Winner-Takes-All (WTA) loss for a set of hypotheses. In regression settings, the existing MCL variants focus on merging the hypotheses, thereby eventually sacrificing the diversity of the predictions. In contrast, our method relies on a novel learned scoring scheme underpinned by a mathematical framework based on Voronoi tessellations of the output space, from which we can derive a probabilistic interpretation.
After empirically validating rMCL with experiments on synthetic data, we further assess its merits on the sound source localization problem, demonstrating its practical usefulness and the relevance of its interpretation. | Resilient Multiple Choice Learning: A learned scoring scheme with application to audio scene analysis | [
"Victor Letzelter",
"Mathieu Fontaine",
"Mickael Chen",
"Patrick Perez",
"Slim Essid",
"Gaël Richard"
] | Conference | poster | 2311.01052 | [
"https://github.com/victorletzelter/code-rmcl"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
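The pairing of a Winner-Takes-All loss with a learned scoring scheme, as described in the rMCL row above, can be sketched compactly. The simplification below trains a softmax score head to predict which hypothesis wins; the actual method instead ties per-hypothesis scores to the probability mass of Voronoi cells, so treat this as an illustration of the training signal only.

```python
import torch
import torch.nn.functional as F

def wta_with_scores(preds, score_logits, target):
    """preds: (B, K, D) hypotheses, score_logits: (B, K), target: (B, D).
    Only the hypothesis closest to the target receives regression gradient;
    the score head learns to predict the winning hypothesis."""
    dists = ((preds - target.unsqueeze(1)) ** 2).sum(-1)   # (B, K)
    winner = dists.argmin(dim=1)                           # (B,)
    wta = dists.gather(1, winner.unsqueeze(1)).mean()      # winner-takes-all
    score_loss = F.cross_entropy(score_logits, winner)
    return wta + score_loss

preds = torch.randn(4, 5, 2, requires_grad=True)   # 5 hypotheses in 2-D
scores = torch.randn(4, 5, requires_grad=True)
target = torch.randn(4, 2)
print(wta_with_scores(preds, scores, target).item())
```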
null | https://openreview.net/forum?id=eeeqORvJbf | @inproceedings{
zou2023learning,
title={Learning and processing the ordinal information of temporal sequences in recurrent neural circuits},
author={Xiaolong Zou and Zhikun Chu and Qinghai Guo and Jie Cheng and Bo Hong and Si Wu and Yuanyuan Mi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=eeeqORvJbf}
} | Temporal sequence processing is fundamental in brain cognitive functions.
Experimental data has indicated that the representations of ordinal information and contents of temporal sequences are disentangled in the brain, but the neural mechanism underlying this disentanglement remains largely unclear. Here, we investigate how recurrent neural circuits learn to represent the abstract order structure of temporal sequences, and how this disentangled representation of order structure from that of contents facilitates the processing of temporal sequences. We show that with an appropriate learning protocol, a recurrent neural circuit can learn a set of tree-structured attractor states to encode the corresponding tree-structured orders of given temporal sequences. This abstract temporal order template can then be bound with different contents, allowing for flexible and robust temporal sequence processing. Using a transfer learning task, we demonstrate that the reuse of a temporal order template facilitates the acquisition of new temporal sequences of the same or similar ordinal structure. Using a key-word spotting task, we demonstrate that the attractor representation of order structure improves the robustness of temporal sequence discrimination, if the ordinal information is the key to differentiating sequences. We hope this study gives us insights into the neural mechanism of representing the ordinal information of temporal sequences in the brain, and helps us to develop brain-inspired temporal sequence processing algorithms. | Learning and processing the ordinal information of temporal sequences in recurrent neural circuits | [
"Xiaolong Zou",
"Zhikun Chu",
"Qinghai Guo",
"Jie Cheng",
"Bo Hong",
"Si Wu",
"Yuanyuan Mi"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=ecRaDicXxw | @inproceedings{
huang2023diffvl,
title={Diff{VL}: Scaling Up Soft Body Manipulation using Vision-Language Driven Differentiable Physics},
author={Zhiao Huang and Feng Chen and Yewen Pu and Chunru Lin and Hao Su and Chuang Gan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ecRaDicXxw}
} | Combining gradient-based trajectory optimization with differentiable physics simulation is an efficient technique for solving soft-body manipulation problems.
Using a well-crafted optimization objective, the solver can quickly converge onto a valid trajectory.
However, writing the appropriate objective functions requires expert knowledge, making it difficult to collect a large set of naturalistic problems from non-expert users.
We introduce DiffVL, a method that enables non-expert users to communicate soft-body manipulation tasks -- a combination of vision and natural language, given in multiple stages -- that can be readily leveraged by a differentiable physics solver.
We have developed GUI tools that enable non-expert users to specify 100 tasks inspired by real-life soft-body manipulations from online videos; we will make these tasks public.
We leverage large language models to translate task descriptions into machine-interpretable optimization objectives. The optimization objectives can help differentiable physics solvers to solve these long-horizon multistage tasks that are challenging for previous baselines. | DiffVL: Scaling Up Soft Body Manipulation using Vision-Language Driven Differentiable Physics | [
"Zhiao Huang",
"Feng Chen",
"Yewen Pu",
"Chunru Lin",
"Hao Su",
"Chuang Gan"
] | Conference | poster | 2312.06408 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=ebMPmx5mr7 | @inproceedings{
paischer2023semantic,
title={Semantic {HELM}: A Human-Readable Memory for Reinforcement Learning},
author={Fabian Paischer and Thomas Adler and Markus Hofmarcher and Sepp Hochreiter},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ebMPmx5mr7}
} | Reinforcement learning agents deployed in the real world often have to cope with partially observable environments.
Therefore, most agents employ memory mechanisms to approximate the state of the environment.
Recently, there have been impressive success stories in mastering partially observable environments, mostly in the realm of computer games like Dota 2, StarCraft II, or MineCraft.
However, existing methods lack interpretability in the sense that it is not comprehensible for humans what the agent stores in its memory.
In this regard, we propose a novel memory mechanism that represents past events in human language.
Our method uses CLIP to associate visual inputs with language tokens.
Then we feed these tokens to a pretrained language model that serves the agent as memory and provides it with a coherent and human-readable representation of the past.
We train our memory mechanism on a set of partially observable environments and find that it excels on tasks that require a memory component, while mostly attaining performance on par with strong baselines on tasks that do not.
On a challenging continuous recognition task, where memorizing the past is crucial, our memory mechanism converges two orders of magnitude faster than prior methods.
Since our memory mechanism is human-readable, we can peek at an agent's memory and check whether crucial pieces of information have been stored.
This significantly enhances troubleshooting and paves the way toward more interpretable agents. | Semantic HELM: A Human-Readable Memory for Reinforcement Learning | [
"Fabian Paischer",
"Thomas Adler",
"Markus Hofmarcher",
"Sepp Hochreiter"
] | Conference | poster | 2306.09312 | [
"https://github.com/ml-jku/helm"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=eZbqD9BoXe | @inproceedings{
wu2023graphstructured,
title={Graph-Structured Gaussian Processes for Transferable Graph Learning},
author={Jun Wu and Lisa Ainsworth and Andrew Leakey and Haixun Wang and Jingrui He},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=eZbqD9BoXe}
} | Transferable graph learning involves knowledge transferability from a source graph to a relevant target graph. The major challenge of transferable graph learning is the distribution shift between source and target graphs induced by individual node attributes and complex graph structures. To solve this problem, in this paper, we propose a generic graph-structured Gaussian process framework (GraphGP) for adaptively transferring knowledge across graphs with either homophily or heterophily assumptions. Specifically, GraphGP is derived from a novel graph structure-aware neural network in the limit of infinite layer width. The generalization analysis of GraphGP explicitly investigates the connection between knowledge transferability and graph domain similarity. Extensive experiments on several transferable graph learning benchmarks demonstrate the efficacy of GraphGP over state-of-the-art Gaussian process baselines. | Graph-Structured Gaussian Processes for Transferable Graph Learning | [
"Jun Wu",
"Lisa Ainsworth",
"Andrew Leakey",
"Haixun Wang",
"Jingrui He"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=eYCGrGdKf3 | @inproceedings{
zhang2023unleash,
title={Unleash the Potential of Image Branch for Cross-modal 3D Object Detection},
author={Yifan Zhang and Qijian Zhang and Junhui Hou and Yixuan Yuan and Guoliang Xing},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=eYCGrGdKf3}
} | To achieve reliable and precise scene understanding, autonomous vehicles typically incorporate multiple sensing modalities to capitalize on their complementary attributes. However, existing cross-modal 3D detectors do not fully utilize the image domain information to address the bottleneck issues of the LiDAR-based detectors. This paper presents a new cross-modal 3D object detector, namely UPIDet, which aims to unleash the potential of the image branch from two aspects. First, UPIDet introduces a new 2D auxiliary task called normalized local coordinate map estimation. This approach enables the learning of local spatial-aware features from the image modality to supplement sparse point clouds. Second, we discover that the representational capability of the point cloud backbone can be enhanced through the gradients backpropagated from the training objectives of the image branch, utilizing a succinct and effective point-to-pixel module. Extensive experiments and ablation studies validate the effectiveness of our method. Notably, we achieved the top rank in the highly competitive cyclist class of the KITTI benchmark at the time of submission. The source code is available at https://github.com/Eaphan/UPIDet. | Unleash the Potential of Image Branch for Cross-modal 3D Object Detection | [
"Yifan Zhang",
"Qijian Zhang",
"Junhui Hou",
"Yixuan Yuan",
"Guoliang Xing"
] | Conference | poster | 2301.09077 | [
"https://github.com/eaphan/upidet"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=eXubleMT0q | @inproceedings{
ran2023penguin,
title={Penguin: Parallel-Packed Homomorphic Encryption for Fast Graph Convolutional Network Inference},
author={Ran Ran and Nuo Xu and Tao Liu and Wei Wang and Gang Quan and Wujie Wen},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=eXubleMT0q}
} | The marriage of Graph Convolutional Network (GCN) and Homomorphic Encryption (HE) enables the inference of graph data on the cloud with significantly enhanced client data privacy. However, the tremendous computation and memory overhead associated with HE operations challenges the practicality of HE-based GCN inference. GCN inference involves a sequence of expensive matrix-matrix multiplications, and we observe that directly applying the state-of-the-art HE-based secure matrix-matrix multiplication solutions to accelerate HE-GCN inference is far less efficient as it does not exploit the unique aggregation mechanism of two-dimensional graph node features in GCN layer computation.
As a result, in this paper, we propose a novel HE-based ciphertext packing technique, i.e., Penguin, that can take advantage of the unique computation pattern during the HE-GCN inference to significantly reduce the computation and memory overhead associated with HE operations.
Specifically, Penguin employs (i) an effective two-dimensional parallel packing technique for feature ciphertexts with optimal graph node partitioning and graph feature interleaving, and (ii) an interleaved assembly technique that can effectively make use of the blank slots to merge ciphertexts after feature reduction and significantly reduce the costly rotation operation.
We provide theoretical analysis and experimental validation to demonstrate the speedup achieved by Penguin in accelerating GCN inference using popular GCN models and datasets. Our results show that Penguin can achieve up to $\sim10\times$ speedup and around $\sim79$% reduction in computational memory overhead, significantly outperforming state-of-the-art solutions. To the best of our knowledge, this is the first work that can ensure the protection of both graph structure and features when accelerating HE-GCN inference on encrypted data. Our code is publicly available at https://github.com/ranran0523/Penguin. | Penguin: Parallel-Packed Homomorphic Encryption for Fast Graph Convolutional Network Inference | [
"Ran Ran",
"Nuo Xu",
"Tao Liu",
"Wei Wang",
"Gang Quan",
"Wujie Wen"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=eX6xDto3Ed | @inproceedings{
nguyen2023flat,
title={Flat Seeking Bayesian Neural Networks},
author={Van-Anh Nguyen and Long Tung Vuong and Hoang Phan and Thanh-Toan Do and Dinh Phung and Trung Le},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=eX6xDto3Ed}
} | Bayesian Neural Networks (BNNs) provide a probabilistic interpretation for deep learning models by imposing a prior distribution over model parameters and inferring a posterior distribution based on observed data. Models sampled from the posterior distribution can be used to provide ensemble predictions and quantify prediction uncertainty. It is well known that deep learning models with lower sharpness have better generalization ability. However, existing posterior inferences are not aware of sharpness/flatness in their formulation, possibly leading to high sharpness for the models sampled from them. In this paper, we develop the theory, Bayesian setting, and variational inference approach for a sharpness-aware posterior. Specifically, the models sampled from our sharpness-aware posterior, and the optimal approximate posterior estimating it, have better flatness and hence potentially higher generalization ability. We conduct experiments by leveraging the sharpness-aware posterior with state-of-the-art Bayesian Neural Networks, showing that the flat-seeking counterparts outperform their baselines in all metrics of interest. | Flat Seeking Bayesian Neural Networks | [
"Van-Anh Nguyen",
"Long Tung Vuong",
"Hoang Phan",
"Thanh-Toan Do",
"Dinh Phung",
"Trung Le"
] | Conference | poster | 2302.02713 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
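The flat-seeking behavior described above is closely related to sharpness-aware minimization (SAM). As a point of reference only, here is a generic SAM-style update in PyTorch: perturb the weights toward the locally worst-case direction, then descend on the perturbed loss. This sketch does not implement the paper's sharpness-aware posterior or its variational inference; it only makes the sharpness-seeking mechanics concrete.

```python
import torch

def sam_step(model, loss_fn, x, y, optimizer, rho=0.05):
    """One sharpness-aware update (generic SAM sketch)."""
    loss_fn(model(x), y).backward()                  # gradient at current weights
    grads = [p.grad.detach().clone() for p in model.parameters()]
    norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    eps = [rho * g / (norm + 1e-12) for g in grads]  # worst-case direction
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            p.add_(e)                                # climb to the perturbed point
    model.zero_grad()
    loss_fn(model(x), y).backward()                  # gradient of the perturbed loss
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            p.sub_(e)                                # restore the weights
    optimizer.step()                                 # descend using perturbed gradients
    optimizer.zero_grad()
```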
null | https://openreview.net/forum?id=eWKqr1zcRv | @inproceedings{
wu2023practical,
title={Practical and Asymptotically Exact Conditional Sampling in Diffusion Models},
author={Luhuan Wu and Brian L. Trippe and Christian A Naesseth and John Patrick Cunningham and David Blei},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=eWKqr1zcRv}
} | Diffusion models have been successful on a range of conditional generation tasks including molecular design and text-to-image generation. However, these achievements have primarily depended on task-specific conditional training or error-prone heuristic approximations. Ideally, a conditional generation method should provide exact samples for a broad range of conditional distributions without requiring task-specific training. To this end, we introduce the Twisted Diffusion Sampler, or TDS. TDS is a sequential Monte Carlo (SMC) algorithm that targets the conditional distributions of diffusion models by simulating a set of weighted particles. The main idea is to use twisting, an SMC technique that enjoys good computational efficiency, to incorporate heuristic approximations without compromising asymptotic exactness. We first find in simulation and in conditional image generation tasks that TDS provides a trade-off between computation and statistical accuracy, yielding more accurate approximations with more particles while already improving over heuristics with as few as two particles. We then turn to motif-scaffolding, a core task in protein design, using a TDS extension to Riemannian diffusion models; on benchmark tasks, TDS allows flexible conditioning criteria and often outperforms the state-of-the-art, conditionally trained model. Code can be found at https://github.com/blt2114/twisted_diffusion_sampler | Practical and Asymptotically Exact Conditional Sampling in Diffusion Models | [
"Luhuan Wu",
"Brian L. Trippe",
"Christian A Naesseth",
"David Blei",
"John Patrick Cunningham"
] | Conference | poster | 2306.17775 | [
"https://github.com/blt2114/twisted_diffusion_sampler"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
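The SMC backbone behind such samplers propagates weighted particles and resamples when the weights degenerate. The snippet below shows only that generic resampling machinery (not the diffusion model, twisting functions, or proposals, which are the paper's actual contribution); names are illustrative.

```python
import numpy as np

def resample_if_needed(particles, log_w, ess_frac=0.5, rng=None):
    """Multinomial resampling, triggered when the effective sample
    size drops below a fraction of the particle count."""
    rng = rng or np.random.default_rng()
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    ess = 1.0 / np.sum(w ** 2)            # effective sample size
    if ess < ess_frac * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=w)
        particles = particles[idx]
        log_w = np.zeros(len(particles))  # uniform weights after resampling
    return particles, log_w
```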
null | https://openreview.net/forum?id=eW233GDOpm | @inproceedings{
zheng2023response,
title={Response Length Perception and Sequence Scheduling: An {LLM}-Empowered {LLM} Inference Pipeline},
author={Zangwei Zheng and Xiaozhe Ren and Fuzhao Xue and Yang Luo and Xin Jiang and Yang You},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=eW233GDOpm}
} | Large language models (LLMs) have revolutionized the field of AI, demonstrating unprecedented capacity across various tasks. However, the inference process for LLMs comes with significant computational costs. In this paper, we propose an efficient LLM inference pipeline that harnesses the power of LLMs. Our approach begins by tapping into the potential of LLMs to accurately perceive and predict the response length with minimal overhead. By leveraging this information, we introduce an efficient sequence scheduling technique that groups queries with similar response lengths into micro-batches. We evaluate our approach on real-world instruction datasets using the LLaMA-based model, and our results demonstrate an impressive 86% improvement in inference throughput without compromising effectiveness. Notably, our method is orthogonal to other inference acceleration techniques, making it a valuable addition to many existing toolkits (e.g., FlashAttention, Quantization) for LLM inference. | Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline | [
"Zangwei Zheng",
"Xiaozhe Ren",
"Fuzhao Xue",
"Yang Luo",
"Xin Jiang",
"Yang You"
] | Conference | poster | [
"https://github.com/zhengzangw/sequence-scheduling"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
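The scheduling step described above is simple to state: sort queries by predicted response length and cut the sorted order into micro-batches, so that each batch's padding (driven by its longest response) stays small. A minimal sketch, assuming length predictions are already available; the paper's pipeline additionally uses the LLM itself to produce those predictions and handles mispredictions.

```python
def schedule_micro_batches(queries, predicted_lengths, batch_size):
    """Group queries with similar predicted response lengths so each
    micro-batch wastes little computation on padding."""
    order = sorted(range(len(queries)), key=lambda i: predicted_lengths[i])
    return [[queries[i] for i in order[s:s + batch_size]]
            for s in range(0, len(order), batch_size)]

batches = schedule_micro_batches(["q1", "q2", "q3", "q4"],
                                 [120, 15, 400, 30], batch_size=2)
# -> [["q2", "q4"], ["q1", "q3"]]
```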
null | https://openreview.net/forum?id=eVrmcOvJV4 | @inproceedings{
chandra2023inferring,
title={Inferring the Future by Imagining the Past},
author={Kartik Chandra and Tony Chen and Tzu-Mao Li and Jonathan Ragan-Kelley and Joshua B. Tenenbaum},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=eVrmcOvJV4}
} | A single panel of a comic book can say a lot: it can depict not only where the characters currently are, but also their motions, their motivations, their emotions, and what they might do next. More generally, humans routinely infer complex sequences of past and future events from a *static snapshot* of a *dynamic scene*, even in situations they have never seen before.
In this paper, we model how humans make such rapid and flexible inferences. Building on a long line of work in cognitive science, we offer a Monte Carlo algorithm whose inferences correlate well with human intuitions in a wide variety of domains, while only using a small, cognitively plausible number of samples. Our key technical insight is a surprising connection between our inference problem and Monte Carlo path tracing, which allows us to apply decades of ideas from the computer graphics community to this seemingly unrelated theory-of-mind task. | Inferring the Future by Imagining the Past | [
"Kartik Chandra",
"Tony Chen",
"Tzu-Mao Li",
"Jonathan Ragan-Kelley",
"Joshua B. Tenenbaum"
] | Conference | spotlight | 2305.17195 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=eUf0CaS5AP | @inproceedings{
xu2023xagen,
title={{XAG}en: 3D Expressive Human Avatars Generation},
author={Zhongcong Xu and Jianfeng Zhang and Jun Hao Liew and Jiashi Feng and Mike Zheng Shou},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=eUf0CaS5AP}
} | Recent advances in 3D-aware GAN models have enabled the generation of realistic and controllable human body images. However, existing methods focus on the control of major body joints, neglecting the manipulation of expressive attributes, such as facial expressions, jaw poses, hand poses, and so on. In this work, we present XAGen, the first 3D generative model for human avatars capable of expressive control over body, face, and hands. To enhance the fidelity of small-scale regions like face and hands, we devise a multi-scale and multi-part 3D representation that models fine details. Based on this representation, we propose a multi-part rendering technique that disentangles the synthesis of body, face, and hands to ease model training and enhance geometric quality. Furthermore, we design multi-part discriminators that evaluate the quality of the generated avatars with respect to their appearance and fine-grained control capabilities. Experiments show that XAGen surpasses state-of-the-art methods in terms of realism, diversity, and expressive control abilities. Code and data will be made available at https://showlab.github.io/xagen. | XAGen: 3D Expressive Human Avatars Generation | [
"Zhongcong Xu",
"Jianfeng Zhang",
"Jun Hao Liew",
"Jiashi Feng",
"Mike Zheng Shou"
] | Conference | poster | 2311.13574 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=eU6P4aUdCA | @inproceedings{
ma2023algorithmic,
title={Algorithmic Regularization in Tensor Optimization: Towards a Lifted Approach in Matrix Sensing},
author={Ziye Ma and Javad Lavaei and Somayeh Sojoudi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=eU6P4aUdCA}
} | Gradient descent (GD) is crucial for generalization in machine learning models, as it induces implicit regularization, promoting compact representations. In this work, we examine the role of GD in inducing implicit regularization for tensor optimization, particularly within the context of the lifted matrix sensing framework. This framework has been recently proposed to address the non-convex matrix sensing problem by transforming spurious solutions into strict saddles when optimizing over symmetric, rank-1 tensors. We show that, with sufficiently small initialization scale, GD applied to this lifted problem results in approximate rank-1 tensors and critical points with escape directions. Our findings underscore the significance of the tensor parametrization of matrix sensing, in combination with first-order methods, in achieving global optimality in such problems. | Algorithmic Regularization in Tensor Optimization: Towards a Lifted Approach in Matrix Sensing | [
"Ziye Ma",
"Javad Lavaei",
"Somayeh Sojoudi"
] | Conference | poster | 2310.15549 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=eTp4RetK74 | @inproceedings{
park2023aspen,
title={{ASPEN}: Breaking Operator Barriers for Efficient Parallelization of Deep Neural Networks},
author={Jongseok Park and Kyungmin Bin and Gibum Park and Sangtae Ha and Kyunghan Lee},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=eTp4RetK74}
} | Modern Deep Neural Network (DNN) frameworks use tensor operators as the main building blocks of DNNs. However, we observe that operator-based construction of DNNs incurs significant drawbacks in parallelism in the form of synchronization barriers. Synchronization barriers of operators confine the scope of parallel computation to each operator and obscure the rich parallel computation opportunities that exist across operators. To this end, we present ASPEN, a novel parallel computation solution for DNNs that achieves fine-grained dynamic execution of DNNs, which (1) removes the operator barriers and expresses DNNs in dataflow graphs of fine-grained tiles to expose the parallel computation opportunities across operators, and (2) exploits these opportunities by dynamically locating and scheduling them at runtime. This novel approach of ASPEN enables opportunistic parallelism, a new class of parallelism for DNNs that is unavailable in the existing operator-based approaches. ASPEN also achieves high resource utilization and memory reuse by letting each resource asynchronously traverse depthwise in the DNN graph to its full computing potential. We discuss the challenges of our approach and our solutions, and show that our proof-of-concept implementation of ASPEN on CPU shows exceptional performance, outperforming the state-of-the-art inference systems TorchScript and TVM by up to 3.2$\times$ and 4.3$\times$, respectively. | ASPEN: Breaking Operator Barriers for Efficient Parallelization of Deep Neural Networks | [
"Jongseok Park",
"Kyungmin Bin",
"Gibum Park",
"Sangtae Ha",
"Kyunghan Lee"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
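The fine-grained dynamic execution described above amounts to running a dependency graph of tiles with a ready queue: a tile becomes eligible as soon as the tiles it depends on finish, even across operator boundaries. Below is a single-threaded sketch of that scheduling idea, with illustrative names; ASPEN's asynchronous multi-resource runtime is far more involved.

```python
from collections import deque

def run_tile_dag(tiles, deps, execute):
    """Execute a DAG of fine-grained tiles in dependency order.
    `deps[t]` lists the tiles that tile `t` waits on."""
    remaining = {t: len(deps[t]) for t in tiles}
    dependents = {t: [] for t in tiles}
    for t in tiles:
        for d in deps[t]:
            dependents[d].append(t)
    ready = deque(t for t in tiles if remaining[t] == 0)
    while ready:
        t = ready.popleft()
        execute(t)                    # compute this tile
        for u in dependents[t]:       # unlock dependents, even across operators
            remaining[u] -= 1
            if remaining[u] == 0:
                ready.append(u)
```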
null | https://openreview.net/forum?id=eTMHsUp3Ii | @inproceedings{
liu2023double,
title={Double Randomized Underdamped Langevin with Dimension-Independent Convergence Guarantee},
author={Yuanshi Liu and Cong Fang and Tong Zhang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=eTMHsUp3Ii}
} | This paper focuses on the high-dimensional sampling of log-concave distributions with composite structures: $p^*(\mathrm{d}x)\propto \exp(-g(x)-f(x))\mathrm{d}x$. We develop a double randomization technique, which leads to a fast underdamped Langevin algorithm with a dimension-independent convergence guarantee. We prove that the algorithm enjoys an overall $\tilde{\mathcal{O}}\left(\frac{\left(\mathrm{tr}(H)\right)^{1/3}}{\epsilon^{2/3}}\right)$ iteration complexity to reach an $\epsilon$-tolerated sample whose distribution $p$ admits $W_2(p,p^*)\leq \epsilon$. Here, $H$ is an upper bound of the Hessian matrices for $f$ and does not explicitly depend on dimension $d$. For posterior sampling over linear models with normalized data, we obtain a convergence rate that is dimension-free and outperforms the previous best-known results by a factor of $d^{1/3}$. The analysis behind this faster convergence rate brings new insights into high-dimensional sampling. | Double Randomized Underdamped Langevin with Dimension-Independent Convergence Guarantee | [
"Yuanshi Liu",
"Cong Fang",
"Tong Zhang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
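For orientation, the continuous-time underdamped Langevin dynamics that such samplers discretize can be written as below, assuming $f+g$ is differentiable; the paper's double randomization concerns the discretization itself and is not shown.

```latex
% Underdamped Langevin dynamics targeting p^*(x) \propto e^{-f(x)-g(x)}
\begin{aligned}
\mathrm{d}x_t &= v_t\,\mathrm{d}t,\\
\mathrm{d}v_t &= -\gamma v_t\,\mathrm{d}t
  - \nabla\big(f(x_t)+g(x_t)\big)\,\mathrm{d}t
  + \sqrt{2\gamma}\,\mathrm{d}B_t,
\end{aligned}
\qquad
\pi(x,v) \propto \exp\!\Big(-f(x)-g(x)-\tfrac{1}{2}\lVert v\rVert^2\Big).
```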
null | https://openreview.net/forum?id=eTHawKFT4h | @inproceedings{
wild2023a,
title={A Rigorous Link between Deep Ensembles and (Variational) Bayesian Methods},
author={Veit David Wild and Sahra Ghalebikesabi and Dino Sejdinovic and Jeremias Knoblauch},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=eTHawKFT4h}
} | We establish the first mathematically rigorous link between Bayesian, variational Bayesian, and ensemble methods. A key step towards this is to reformulate the non-convex optimisation problem typically encountered in deep learning as a convex optimisation in the space of probability measures. On a technical level, our contribution amounts to studying generalised variational inference through the lens of Wasserstein gradient flows. The result is a unified theory of various seemingly disconnected approaches that are commonly used for uncertainty quantification in deep learning---including deep ensembles and (variational) Bayesian methods. This offers a fresh perspective on the reasons behind the success of deep ensembles over procedures based on parameterised variational inference, and allows the derivation of new ensembling schemes with convergence guarantees. We showcase this by proposing a family of interacting deep ensembles with direct parallels to the interactions of particle systems in thermodynamics, and use our theory to prove the convergence of these algorithms to a well-defined global minimiser on the space of probability measures. | A Rigorous Link between Deep Ensembles and (Variational) Bayesian Methods | [
"Veit David Wild",
"Sahra Ghalebikesabi",
"Dino Sejdinovic",
"Jeremias Knoblauch"
] | Conference | oral | 2305.15027 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=eTF3VDH2b6 | @inproceedings{
mukhoty2023direct,
title={Direct Training of {SNN} using Local Zeroth Order Method},
author={Bhaskar Mukhoty and Velibor Bojkovic and William de Vazelhes and Xiaohan Zhao and Giulia De Masi and Huan Xiong and Bin Gu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=eTF3VDH2b6}
} | Spiking neural networks (SNNs) are becoming increasingly popular for their low energy requirements in real-world tasks, with accuracy comparable to traditional ANNs. SNN training algorithms face the loss of gradient information and non-differentiability caused by the Heaviside spiking function when minimizing the model loss over model parameters. To circumvent this problem, the surrogate method employs a differentiable approximation of the Heaviside function in the backward pass, while the forward pass continues to use the Heaviside as the spiking function. We propose to use the zeroth-order technique at the local, or neuron, level in training SNNs, motivated by its regularizing and potentially energy-efficient effects, and we establish a theoretical connection between it and the existing surrogate methods. We perform experimental validation of the technique on standard static datasets (CIFAR-10, CIFAR-100, ImageNet-100) and neuromorphic datasets (DVS-CIFAR-10, DVS-Gesture, N-Caltech-101, NCARS) and obtain results that improve over the state of the art. The proposed method also lends itself to efficient implementations of the back-propagation method, which could provide a 3-4 times overall speedup in training time. The code is available at \url{https://github.com/BhaskarMukhoty/LocalZO}. | Direct Training of SNN using Local Zeroth Order Method | [
"Bhaskar Mukhoty",
"Velibor Bojkovic",
"William de Vazelhes",
"Xiaohan Zhao",
"Giulia De Masi",
"Huan Xiong",
"Bin Gu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
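The local zeroth-order idea can be sketched as a custom autograd function: the forward pass keeps the exact Heaviside spike, while the backward pass uses a per-neuron two-point zeroth-order estimate of its derivative, $(H(u+\delta)-H(u-\delta))/(2\delta) = \mathbb{1}\{|u|<\delta\}/(2\delta)$ for a sampled perturbation $\delta>0$. The estimator and sampling distribution below are assumptions for illustration and may differ from the paper's exact LocalZO construction.

```python
import torch

class HeavisideLocalZO(torch.autograd.Function):
    """Heaviside spike forward; local two-point zeroth-order surrogate backward."""

    @staticmethod
    def forward(ctx, u, delta_std=0.05):
        delta = delta_std * torch.randn_like(u).abs()        # local perturbation
        # (H(u + d) - H(u - d)) / (2d) equals 1{|u| < d} / (2d)
        surrogate = (u.abs() < delta).float() / (2.0 * delta + 1e-12)
        ctx.save_for_backward(surrogate)
        return (u >= 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (surrogate,) = ctx.saved_tensors
        return grad_out * surrogate, None

u = torch.randn(8, requires_grad=True)
HeavisideLocalZO.apply(u).sum().backward()  # u.grad holds the ZO surrogate
```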
null | https://openreview.net/forum?id=eT1tMdAUoc | @inproceedings{
xie2023spatially,
title={Spatially Resolved Gene Expression Prediction from Histology Images via Bi-modal Contrastive Learning},
author={Ronald Xie and Kuan Pang and Sai W Chung and Catia Perciani and Sonya MacParland and BO WANG and Gary Bader},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=eT1tMdAUoc}
} | Histology imaging is an important tool in medical diagnosis and research, enabling the examination of tissue structure and composition at the microscopic level. Understanding the underlying molecular mechanisms of tissue architecture is critical in uncovering disease mechanisms and developing effective treatments. Gene expression profiling provides insight into the molecular processes underlying tissue architecture, but the process can be time-consuming and expensive. We present BLEEP (Bi-modaL Embedding for Expression Prediction), a bi-modal embedding framework capable of generating spatially resolved gene expression profiles from whole-slide hematoxylin and eosin (H&E)-stained histology images. BLEEP uses contrastive learning to construct a low-dimensional joint embedding space from a reference dataset using paired image and expression profiles at micrometer resolution. With this approach, the gene expression of any query image patch can be imputed using the expression profiles from the reference dataset. We demonstrate BLEEP’s effectiveness in gene expression prediction by benchmarking its performance on a human liver tissue dataset captured using the 10x Visium platform, where it achieves significant improvements over existing methods. Our results demonstrate the potential of BLEEP to provide insights into the molecular mechanisms underlying tissue architecture, with important implications for the diagnosis and research of various diseases. The proposed approach can significantly reduce the time and cost associated with gene expression profiling, opening up new avenues for high-throughput analysis of histology images for both research and clinical applications. | Spatially Resolved Gene Expression Prediction from Histology Images via Bi-modal Contrastive Learning | [
"Ronald Xie",
"Kuan Pang",
"Sai W Chung",
"Catia Perciani",
"Sonya MacParland",
"BO WANG",
"Gary Bader"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
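The imputation step described above reduces to a nearest-neighbor query in the joint embedding space: embed the H&E patch, find the closest reference pairs, and average their expression profiles. A minimal NumPy sketch of that query step, with illustrative names; the contrastive training that produces the embeddings is omitted.

```python
import numpy as np

def impute_expression(query_emb, ref_embs, ref_expr, k=50):
    """Average the expression profiles of the k reference anchors
    nearest to the query patch in the joint embedding space."""
    q = query_emb / np.linalg.norm(query_emb)
    r = ref_embs / np.linalg.norm(ref_embs, axis=1, keepdims=True)
    topk = np.argsort(r @ q)[-k:]         # highest cosine similarity
    return ref_expr[topk].mean(axis=0)    # imputed (n_genes,) profile
```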
null | https://openreview.net/forum?id=eT1QOsssRB | @inproceedings{
halpern2023strategyproof,
title={Strategyproof Voting under Correlated Beliefs},
author={Daniel Halpern and Rachel Li and Ariel D. Procaccia},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=eT1QOsssRB}
} | In voting theory, when voters have ranked preferences over candidates, the celebrated Gibbard-Satterthwaite Theorem essentially rules out the existence of reasonable strategyproof methods for picking a winner. What if we weaken strategyproofness to only hold for Bayesian voters with beliefs over others' preferences? When voters believe other participants' rankings are drawn independently from a fixed distribution, the impossibility persists. However, it is quite reasonable for a voter to believe that other votes are correlated, either to each other or to their own ranking. We consider such beliefs induced by classic probabilistic models in social choice such as the Mallows, Plackett-Luce, and Thurstone-Mosteller models. We single out the plurality rule (choosing the candidate ranked first most often) as a particularly promising choice, as it is strategyproof for a large class of beliefs containing the specific ones we introduce. Further, we show that plurality is unique among positional scoring rules in having this property: no other scoring rule is strategyproof for beliefs induced by the Mallows model when there are a sufficient number of voters. Finally, we give examples of prominent non-scoring voting rules failing to be strategyproof on beliefs in this class, further bolstering the case for plurality. | Strategyproof Voting under Correlated Beliefs | [
"Daniel Halpern",
"Rachel Li",
"Ariel D. Procaccia"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
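The plurality rule the abstract singles out is trivial to implement, which is part of its appeal; a minimal sketch (ties broken arbitrarily):

```python
from collections import Counter

def plurality_winner(rankings):
    """Return the candidate ranked first by the most voters."""
    first_places = Counter(r[0] for r in rankings)
    return first_places.most_common(1)[0][0]

print(plurality_winner([["a", "b", "c"],
                        ["a", "c", "b"],
                        ["b", "a", "c"]]))  # -> "a"
```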
null | https://openreview.net/forum?id=eR7PrfJe9o | @inproceedings{
adomaityte2023classification,
title={Classification of Heavy-tailed Features in High Dimensions: a Superstatistical Approach},
author={Urte Adomaityte and Gabriele Sicuro and Pierpaolo Vivo},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=eR7PrfJe9o}
} | We characterise the learning of a mixture of two clouds of data points with generic centroids via empirical risk minimisation in the high dimensional regime, under the assumptions of generic convex loss and convex regularisation. Each cloud of data points is obtained via a double-stochastic process, where the sample is obtained from a Gaussian distribution whose variance is itself a random parameter sampled from a scalar distribution $\varrho$. As a result, our analysis covers a large family of data distributions, including the case of power-law-tailed distributions with no covariance, and allows us to test recent ''Gaussian universality'' claims. We study the generalisation performance of the obtained estimator, we analyse the role of regularisation, and we analytically characterise the separability transition. | Classification of Heavy-tailed Features in High Dimensions: a Superstatistical Approach | [
"Urte Adomaityte",
"Gabriele Sicuro",
"Pierpaolo Vivo"
] | Conference | poster | 2304.02912 | [
"https://github.com/urteado/super_classification"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
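The double-stochastic data model above is easy to simulate: draw a per-sample variance from a scalar distribution $\varrho$, then draw the sample from a Gaussian with that variance. In the sketch below the inverse-gamma choice of $\varrho$ (which yields Student-t-like, power-law tails) and all parameters are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def sample_superstatistical_cloud(n, centroid, rng=None):
    """Draw n points whose per-point Gaussian variance is itself random."""
    rng = rng or np.random.default_rng(0)
    sigma2 = 1.0 / rng.gamma(shape=2.5, scale=1.0, size=n)  # rho: inverse-gamma
    noise = rng.standard_normal((n, len(centroid)))
    return centroid + np.sqrt(sigma2)[:, None] * noise      # heavy-tailed cloud

cloud = sample_superstatistical_cloud(1000, np.zeros(3))
```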