bibtex_url | proceedings | bibtext | abstract | title | authors | id | type | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | paper_page_exists_pre_conf | Models | Datasets | Spaces |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | https://openreview.net/forum?id=jzseUq55eP | @inproceedings{
fishman2023metropolis,
title={Metropolis Sampling for Constrained Diffusion Models},
author={Nic Fishman and Leo Klarner and Emile Mathieu and Michael John Hutchinson and Valentin De Bortoli},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jzseUq55eP}
} | Denoising diffusion models have recently emerged as the predominant paradigm for generative modelling on image domains. In addition, their extension to Riemannian manifolds has facilitated a range of applications across the natural sciences. While many of these problems stand to benefit from the ability to specify arbitrary, domain-informed constraints, this setting is not covered by the existing (Riemannian) diffusion model methodology. Recent work has attempted to address this issue by constructing novel noising processes based on the reflected Brownian motion and logarithmic barrier methods. However, the associated samplers are either computationally burdensome or only apply to convex subsets of Euclidean space. In this paper, we introduce an alternative, simple noising scheme based on Metropolis sampling that affords substantial gains in computational efficiency and empirical performance compared to the earlier samplers. Of independent interest, we prove that this new process corresponds to a valid discretisation of the reflected Brownian motion. We demonstrate the scalability and flexibility of our approach on a range of problem settings with convex and non-convex constraints, including applications from geospatial modelling, robotics and protein design. | Metropolis Sampling for Constrained Diffusion Models | [
"Nic Fishman",
"Leo Klarner",
"Emile Mathieu",
"Michael John Hutchinson",
"Valentin De Bortoli"
] | Conference | poster | 2307.05439 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=jze2r6RDFz | @inproceedings{
schultheis2023generalized,
title={Generalized test utilities for long-tail performance in extreme multi-label classification},
author={Erik Schultheis and Marek Wydmuch and Wojciech Kotlowski and Rohit Babbar and Krzysztof Dembczynski},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jze2r6RDFz}
} | Extreme multi-label classification (XMLC) is the task of selecting a small subset of relevant labels from a very large set of possible labels.
As such, it is characterized by long-tail labels, i.e., most labels have very few positive instances. With standard performance measures such as precision@k, a classifier can ignore tail labels and still report good performance. However, it is often argued that correct predictions in the tail are more "interesting" or "rewarding," but the community has not yet settled on a metric capturing this intuitive concept. The existing propensity-scored metrics fall short of this goal by confounding the problems of long-tail and missing labels. In this paper, we analyze generalized metrics budgeted "at k" as an alternative solution. To tackle the challenging problem of optimizing these metrics, we formulate it in the expected test utility (ETU) framework, which aims to optimize the expected performance on a given test set. We derive optimal prediction rules and construct computationally efficient approximations of them that come with provable regret guarantees and are robust against model misspecification. Our algorithm, based on block coordinate descent, scales effortlessly to XMLC problems and obtains promising results in terms of long-tail performance. | Generalized test utilities for long-tail performance in extreme multi-label classification | [
"Erik Schultheis",
"Marek Wydmuch",
"Wojciech Kotlowski",
"Rohit Babbar",
"Krzysztof Dembczynski"
] | Conference | poster | 2311.05081 | [
"https://github.com/mwydmuch/xcolumns"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=jxhUNLoi4m | @inproceedings{
fu2023essinfogail,
title={Ess-Info{GAIL}: Semi-supervised Imitation Learning from Imbalanced Demonstrations},
author={Huiqiao Fu and Kaiqiang Tang and Yuanyang Lu and Yiming Qi and Guizhou Deng and Flood Sung and Chunlin Chen},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jxhUNLoi4m}
} | Imitation learning aims to reproduce expert behaviors without relying on an explicit reward signal. However, real-world demonstrations often present challenges, such as multi-modality, data imbalance, and expensive labeling processes. In this work, we propose a novel semi-supervised imitation learning architecture that learns disentangled behavior representations from imbalanced demonstrations using limited labeled data. Specifically, our method consists of three key components. First, we adapt the concept of semi-supervised generative adversarial networks to the imitation learning context. Second, we employ a learnable latent distribution to align the generated and expert data distributions. Finally, we utilize a regularized information maximization approach in conjunction with an approximate label prior to further improve the semi-supervised learning performance. Experimental results demonstrate the efficiency of our method in learning multi-modal behaviors from imbalanced demonstrations compared to baseline methods. | Ess-InfoGAIL: Semi-supervised Imitation Learning from Imbalanced Demonstrations | [
"Huiqiao Fu",
"Kaiqiang Tang",
"Yuanyang Lu",
"Yiming Qi",
"Guizhou Deng",
"Flood Sung",
"Chunlin Chen"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=jvYXln6Gzn | @inproceedings{
sheth2023auxiliary,
title={Auxiliary Losses for Learning Generalizable Concept-based Models},
author={Ivaxi Sheth and Samira Ebrahimi Kahou},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jvYXln6Gzn}
} | The increasing use of neural networks in various applications has led to growing apprehension, underscoring the necessity to understand their operations beyond mere final predictions. As a solution to enhance model transparency, Concept Bottleneck Models (CBMs) have gained popularity since their introduction. CBMs essentially limit the latent space of a model to human-understandable high-level concepts. While beneficial, CBMs have been reported to often learn irrelevant concept representations that, in turn, damage model performance. To overcome the performance trade-off, we propose a cooperative-Concept Bottleneck Model (coop-CBM). The concept representation of our model is particularly meaningful when fine-grained concept labels are absent. Furthermore, we introduce the concept orthogonal loss (COL) to encourage the separation between the concept representations and to reduce the intra-concept distance. This paper presents extensive experiments on real-world datasets for image classification tasks, namely CUB, AwA2, CelebA and TIL. We also study the performance of coop-CBM models under various distributional shift settings. We show that our proposed method achieves higher accuracy in all distributional shift settings even compared to the black-box models with the highest concept accuracy. | Auxiliary Losses for Learning Generalizable Concept-based Models | [
"Ivaxi Sheth",
"Samira Ebrahimi Kahou"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=jvEbQBxd8X | @inproceedings{
chen2023improving,
title={Improving Language Plasticity via Pretraining with Active Forgetting},
author={Yihong Chen and Kelly Marchisio and Roberta Raileanu and David Ifeoluwa Adelani and Pontus Stenetorp and Sebastian Riedel and Mikel Artetxe},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jvEbQBxd8X}
} | Pretrained language models (PLMs) are today the primary model for natural language processing. Despite their impressive downstream performance, it can be difficult to apply PLMs to new languages, a barrier to making their capabilities universally accessible. While prior work has shown it is possible to address this issue by learning a new embedding layer for the new language, doing so is both data and compute inefficient. We propose to use an active forgetting mechanism during pretraining, as a simple way of creating PLMs that can quickly adapt to new languages. Concretely, by resetting the embedding layer every K updates during pretraining, we encourage the PLM to improve its ability to learn new embeddings within a limited number of updates, similar to a meta-learning effect. Experiments with RoBERTa show that models pretrained with our forgetting mechanism not only demonstrate faster convergence during language adaptation, but also outperform standard ones in a low-data regime, particularly for languages that are distant from English. Code will be available at https://github.com/facebookresearch/language-model-plasticity. | Improving Language Plasticity via Pretraining with Active Forgetting | [
"Yihong Chen",
"Kelly Marchisio",
"Roberta Raileanu",
"David Ifeoluwa Adelani",
"Pontus Stenetorp",
"Sebastian Riedel",
"Mikel Artetxe"
] | Conference | poster | 2307.01163 | [
""
] | https://huggingface.co/papers/2307.01163 | 5 | 6 | 0 | 7 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=jucDLW6G9l | @inproceedings{
nikishin2023deep,
title={Deep Reinforcement Learning with Plasticity Injection},
author={Evgenii Nikishin and Junhyuk Oh and Georg Ostrovski and Clare Lyle and Razvan Pascanu and Will Dabney and Andre Barreto},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jucDLW6G9l}
} | A growing body of evidence suggests that neural networks employed in deep reinforcement learning (RL) gradually lose their plasticity, the ability to learn from new data; however, the analysis and mitigation of this phenomenon is hampered by the complex relationship between plasticity, exploration, and performance in RL. This paper introduces plasticity injection, a minimalistic intervention that increases the network plasticity without changing the number of trainable parameters or biasing the predictions. The applications of this intervention are two-fold: first, as a diagnostic tool — if injection increases the performance, we may conclude that an agent's network was losing its plasticity. This tool allows us to identify a subset of Atari environments where the lack of plasticity causes performance plateaus, motivating future studies on understanding and combating plasticity loss. Second, plasticity injection can be used to improve the computational efficiency of RL training if the agent has to re-learn from scratch due to exhausted plasticity or by growing the agent's network dynamically without compromising performance. The results on Atari show that plasticity injection attains stronger performance compared to alternative methods while being computationally efficient. | Deep Reinforcement Learning with Plasticity Injection | [
"Evgenii Nikishin",
"Junhyuk Oh",
"Georg Ostrovski",
"Clare Lyle",
"Razvan Pascanu",
"Will Dabney",
"Andre Barreto"
] | Conference | spotlight | 2305.15555 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=jtiQ26sCJi | @inproceedings{
copet2023simple,
title={Simple and Controllable Music Generation},
author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre D{\'e}fossez},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jtiQ26sCJi}
} | We tackle the task of conditional music generation. We introduce MusicGen, a single Language Model (LM) that operates over several streams of compressed discrete music representation, i.e., tokens. Unlike prior work, MusicGen comprises a single-stage transformer LM together with efficient token interleaving patterns, which eliminates the need for cascading several models, e.g., hierarchically or via upsampling. Following this approach, we demonstrate how MusicGen can generate high-quality samples, both mono and stereo, while being conditioned on textual description or melodic features, allowing better control over the generated output. We conduct extensive empirical evaluation, considering both automatic and human studies, showing the proposed approach is superior to the evaluated baselines on a standard text-to-music benchmark. Through ablation studies, we shed light on the importance of each of the components comprising MusicGen. Music samples, code, and models are available at https://github.com/facebookresearch/audiocraft | Simple and Controllable Music Generation | [
"Jade Copet",
"Felix Kreuk",
"Itai Gat",
"Tal Remez",
"David Kant",
"Gabriel Synnaeve",
"Yossi Adi",
"Alexandre Défossez"
] | Conference | poster | 2306.05284 | [
"https://github.com/facebookresearch/audiocraft"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=jt10uWlEbc | @inproceedings{
b{\"o}ker2023finegrained,
title={Fine-grained Expressivity of Graph Neural Networks},
author={Jan B{\"o}ker and Ron Levie and Ningyuan Teresa Huang and Soledad Villar and Christopher Morris},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jt10uWlEbc}
} | Numerous recent works have analyzed the expressive power of message-passing graph neural networks (MPNNs), primarily utilizing combinatorial techniques such as the $1$-dimensional Weisfeiler--Leman test ($1$-WL) for the graph isomorphism problem. However, the graph isomorphism objective is inherently binary, not giving insights into the degree of similarity between two given graphs. This work resolves this issue by considering continuous extensions of both $1$-WL and MPNNs to graphons. Concretely, we show that the continuous variant of $1$-WL delivers an accurate topological characterization of the expressive power of MPNNs on graphons, revealing which graphs these networks can distinguish and the level of difficulty in separating them. We identify the finest topology where MPNNs separate points and prove a universal approximation theorem. Consequently, we provide a theoretical framework for graph and graphon similarity combining various topological variants of classical characterizations of the $1$-WL. In particular, we characterize the expressive power of MPNNs in terms of the tree distance, which is a graph distance based on the concept of fractional isomorphisms, and substructure counts via tree homomorphisms, showing that these concepts have the same expressive power as the $1$-WL and MPNNs on graphons. Empirically, we validate our theoretical findings by showing that randomly initialized MPNNs, without training, exhibit competitive performance compared to their trained counterparts. Moreover, we evaluate different MPNN architectures based on their ability to preserve graph distances, highlighting the significance of our continuous $1$-WL test in understanding MPNNs' expressivity. | Fine-grained Expressivity of Graph Neural Networks | [
"Jan Böker",
"Ron Levie",
"Ningyuan Teresa Huang",
"Soledad Villar",
"Christopher Morris"
] | Conference | poster | 2306.03698 | [
"https://github.com/nhuang37/finegrain_expressivity_gnn"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=jown9RvYn7 | @inproceedings{
wu2023domain,
title={Domain Re-Modulation for Few-Shot Generative Domain Adaptation},
author={Yi Wu and Ziqiang Li and Chaoyue Wang and Heliang Zheng and Shanshan Zhao and Bin Li and Dacheng Tao},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jown9RvYn7}
} | In this study, we delve into the task of few-shot Generative Domain Adaptation (GDA), which involves transferring a pre-trained generator from one domain to a new domain using only a few reference images. Inspired by the way human brains acquire knowledge in new domains, we present an innovative generator structure called $\textbf{Domain Re-Modulation (DoRM)}$. DoRM not only meets the criteria of $\textit{high quality}$, $\textit{large synthesis diversity}$, and $\textit{cross-domain consistency}$, which were achieved by previous research in GDA, but also incorporates $\textit{memory}$ and $\textit{domain association}$, akin to how human brains operate. Specifically, DoRM freezes the source generator and introduces new mapping and affine modules (M\&A modules) to capture the attributes of the target domain during GDA. This process resembles the formation of new synapses in human brains. Consequently, a linearly combinable domain shift occurs in the style space. By incorporating multiple new M\&A modules, the generator gains the capability to perform high-fidelity multi-domain and hybrid-domain generation. Moreover, to maintain cross-domain consistency more effectively, we introduce a similarity-based structure loss. This loss aligns the auto-correlation map of the target image with its corresponding auto-correlation map of the source image during training. Through extensive experiments, we demonstrate the superior performance of our DoRM and similarity-based structure loss in few-shot GDA, both quantitatively and qualitatively. Code will be available at https://github.com/wuyi2020/DoRM. | Domain Re-Modulation for Few-Shot Generative Domain Adaptation | [
"Yi Wu",
"Ziqiang Li",
"Chaoyue Wang",
"Heliang Zheng",
"Shanshan Zhao",
"Bin Li",
"Dacheng Tao"
] | Conference | poster | 2302.02550 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=jooPcatnVF | @inproceedings{
wang2023implicit,
title={Implicit Differentiable Outlier Detection Enable Robust Deep Multimodal Analysis},
author={Zhu Wang and Sourav Medya and Sathya N. Ravi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jooPcatnVF}
} | Deep network models are often purely inductive during both training and inference on unseen data. When these models are used for prediction, they may fail to capture important semantic information and implicit dependencies within datasets. Recent advancements have shown that combining multiple modalities in large-scale vision and language settings can improve understanding and generalization performance. However, as the model size increases, fine-tuning and deployment become computationally expensive, even for a small number of downstream tasks. Moreover, it is still unclear how domain or prior modal knowledge can be specified in a backpropagation-friendly manner, especially in large-scale and noisy settings. To address these challenges, we propose a simplified alternative of combining features from pretrained deep networks and freely available semantic explicit knowledge. In order to remove irrelevant explicit knowledge that does not correspond well to the images, we introduce an implicit Differentiable Out-of-Distribution (OOD) detection layer. This layer addresses outlier detection by solving for fixed points of a differentiable function and using the last iterate of the fixed-point solver to backpropagate. In practice, we apply our model on several vision and language downstream tasks including visual question answering, visual reasoning, and image-text retrieval on different datasets. Our experiments show that it is possible to design models that perform similarly to state-of-the-art results but with significantly fewer samples and less training time. Our models and code are available here: https://github.com/ellenzhuwang/implicit_vkood | Implicit Differentiable Outlier Detection Enable Robust Deep Multimodal Analysis | [
"Zhu Wang",
"Sourav Medya",
"Sathya N. Ravi"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=jnIBiP2di1 | @inproceedings{
li2023learning,
title={Learning Reliable Logical Rules with {SATN}et},
author={Zhaoyu Li and Jinpei Guo and Yuhe Jiang and Xujie Si},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jnIBiP2di1}
} | Bridging logical reasoning and deep learning is crucial for advanced AI systems. In this work, we present a new framework that addresses this goal by generating interpretable and verifiable logical rules through differentiable learning, without relying on pre-specified logical structures. Our approach builds upon SATNet, a differentiable MaxSAT solver that learns the underlying rules from input-output examples. Despite its efficacy, the learned weights in SATNet are not straightforwardly interpretable, failing to produce human-readable rules. To address this, we propose a novel specification method called ``maximum equality'', which enables the interchangeability between the learned weights of SATNet and a set of propositional logical rules in weighted MaxSAT form. With the decoded weighted MaxSAT formula, we further introduce several effective verification techniques to validate it against the ground truth rules. Experiments on stream transformations and Sudoku problems show that our decoded rules are highly reliable: using exact solvers on them could achieve 100% accuracy, whereas the original SATNet fails to give correct solutions in many cases. Furthermore, we formally verify that our decoded logical rules are functionally equivalent to the ground truth ones. | Learning Reliable Logical Rules with SATNet | [
"Zhaoyu Li",
"Jinpei Guo",
"Yuhe Jiang",
"Xujie Si"
] | Conference | poster | 2310.02133 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=jnCPN1vpSR | @inproceedings{
kacprzyk2023dcipher,
title={D-{CIPHER}: Discovery of Closed-form Partial Differential Equations},
author={Krzysztof Kacprzyk and Zhaozhi Qian and Mihaela van der Schaar},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jnCPN1vpSR}
} | Closed-form differential equations, including partial differential equations and higher-order ordinary differential equations, are one of the most important tools used by scientists to model and better understand natural phenomena. Discovering these equations directly from data is challenging because it requires modeling relationships between various derivatives that are not observed in the data (equation-data mismatch) and it involves searching across a huge space of possible equations. Current approaches make strong assumptions about the form of the equation and thus fail to discover many well-known phenomena. Moreover, many of them resolve the equation-data mismatch by estimating the derivatives, which makes them inadequate for noisy and infrequent observations. To this end, we propose D-CIPHER, which is robust to measurement artifacts and can uncover a new and very general class of differential equations. We further design a novel optimization procedure, CoLLie, to help D-CIPHER search through this class efficiently. Finally, we demonstrate empirically that it can discover many well-known equations that are beyond the capabilities of current methods. | D-CIPHER: Discovery of Closed-form Partial Differential Equations | [
"Krzysztof Kacprzyk",
"Zhaozhi Qian",
"Mihaela van der Schaar"
] | Conference | poster | 2206.10586 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=jl5a3t78Uh | @inproceedings{
scellier2023energybased,
title={Energy-based learning algorithms for analog computing: a comparative study},
author={Benjamin Scellier and Maxence Ernoult and Jack Kendall and Suhas Kumar},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jl5a3t78Uh}
} | Energy-based learning algorithms have recently gained a surge of interest due to their compatibility with analog (post-digital) hardware. Existing algorithms include contrastive learning (CL), equilibrium propagation (EP) and coupled learning (CpL), all of which contrast two states and differ in the type of perturbation used to obtain the second state from the first one. However, these algorithms have never been explicitly compared on equal footing with the same models and datasets, making it difficult to assess their scalability and decide which one to select in practice. In this work, we carry out a comparison of seven learning algorithms, namely CL and different variants of EP and CpL depending on the signs of the perturbations. Specifically, using these learning algorithms, we train deep convolutional Hopfield networks (DCHNs) on five vision tasks (MNIST, F-MNIST, SVHN, CIFAR-10 and CIFAR-100). We find that, while all algorithms yield comparable performance on MNIST, important differences in performance arise as the difficulty of the task increases. Our key findings reveal that negative perturbations are better than positive ones, and highlight the centered variant of EP (which uses two perturbations of opposite sign) as the best-performing algorithm. We also endorse these findings with theoretical arguments. Additionally, we establish new SOTA results with DCHNs on all five datasets, both in performance and speed. In particular, our DCHN simulations are 13.5 times faster than those of Laborieux et al. (2021), which we achieve thanks to the use of a novel energy minimisation algorithm based on asynchronous updates, combined with reduced precision (16 bits). | Energy-based learning algorithms for analog computing: a comparative study | [
"Benjamin Scellier",
"Maxence Ernoult",
"Jack Kendall",
"Suhas Kumar"
] | Conference | poster | 2312.15103 | [
"https://github.com/rain-neuromorphics/energy-based-learning"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=jkPDRHff3s | @inproceedings{
mbacke2023statistical,
title={Statistical Guarantees for Variational Autoencoders using {PAC}-Bayesian Theory},
author={Sokhna Diarra Mbacke and Florence Clerc and Pascal Germain},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jkPDRHff3s}
} | Since their inception, Variational Autoencoders (VAEs) have become central in machine learning. Despite their widespread use, numerous questions regarding their theoretical properties remain open. Using PAC-Bayesian theory, this work develops statistical guarantees for VAEs. First, we derive the first PAC-Bayesian bound for posterior distributions conditioned on individual samples from the data-generating distribution. Then, we utilize this result to develop generalization guarantees for the VAE's reconstruction loss, as well as upper bounds on the distance between the input and the regenerated distributions. More importantly, we provide upper bounds on the Wasserstein distance between the input distribution and the distribution defined by the VAE's generative model. | Statistical Guarantees for Variational Autoencoders using PAC-Bayesian Theory | [
"Sokhna Diarra Mbacke",
"Florence Clerc",
"Pascal Germain"
] | Conference | spotlight | 2310.04935 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=jhs8F63xI6 | @inproceedings{
zhou2023adaptive,
title={Adaptive Online Replanning with Diffusion Models},
author={Siyuan Zhou and Yilun Du and Shun Zhang and Mengdi Xu and Yikang Shen and Wei Xiao and Dit-Yan Yeung and Chuang Gan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jhs8F63xI6}
} | Diffusion models have emerged as a promising approach to data-driven planning, and have demonstrated impressive performance in robotic control, reinforcement learning, and video planning. Given an effective planner, an important question to consider is replanning -- when given plans should be regenerated due to both action execution errors and external environment changes. Direct plan execution, without replanning, is problematic as errors from individual actions rapidly accumulate and environments are partially observable and stochastic. Simultaneously, replanning at each timestep incurs a substantial computational cost, and may prevent successful task execution, as different generated plans prevent consistent progress to any particular goal. In this paper, we explore how we may effectively replan with diffusion models. We propose a principled approach to determine when to replan, based on the diffusion model's estimated likelihood of existing generated plans. We further present an approach to replan existing trajectories to ensure that new plans follow the same goal state as the original trajectory, which may efficiently bootstrap off previously generated plans. We illustrate how a combination of our proposed additions significantly improves the performance of diffusion planners leading to 38\% gains over past diffusion planning approaches on Maze2D and further enables handling of stochastic and long-horizon robotic control tasks. | Adaptive Online Replanning with Diffusion Models | [
"Siyuan Zhou",
"Yilun Du",
"Shun Zhang",
"Mengdi Xu",
"Yikang Shen",
"Wei Xiao",
"Dit-Yan Yeung",
"Chuang Gan"
] | Conference | poster | 2310.09629 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=jh3UNSQK0l | @inproceedings{
chen2023finitetime,
title={Finite-Time Analysis of Single-Timescale Actor-Critic},
author={Xuyang Chen and Lin Zhao},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jh3UNSQK0l}
} | Actor-critic methods have achieved significant success in many challenging applications. However, their finite-time convergence is still poorly understood in the most practical single-timescale form. Existing works on analyzing single-timescale actor-critic have been limited to i.i.d. sampling or the tabular setting for simplicity. We investigate the more practical online single-timescale actor-critic algorithm on a continuous state space, where the critic assumes linear function approximation and updates with a single Markovian sample per actor step. Previous analyses have been unable to establish convergence for such a challenging scenario. We demonstrate that the online single-timescale actor-critic method provably finds an $\epsilon$-approximate stationary point with $\widetilde{\mathcal{O}}(\epsilon^{-2})$ sample complexity under standard assumptions, which can be further improved to $\mathcal{O}(\epsilon^{-2})$ under i.i.d. sampling. Our novel framework systematically evaluates and controls the error propagation between the actor and critic. It offers a promising approach for analyzing other single-timescale reinforcement learning algorithms as well. | Finite-Time Analysis of Single-Timescale Actor-Critic | [
"Xuyang Chen",
"Lin Zhao"
] | Conference | poster | 2210.09921 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=jgIrJeHHlz | @inproceedings{
hong2023debiasing,
title={Debiasing Scores and Prompts of 2D Diffusion for View-consistent Text-to-3D Generation},
author={Susung Hong and Donghoon Ahn and Seungryong Kim},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jgIrJeHHlz}
} | Existing score-distilling text-to-3D generation techniques, despite their considerable promise, often encounter the view inconsistency problem. One of the most notable issues is the Janus problem, where the most canonical view of an object (\textit{e.g}., face or head) appears in other views. In this work, we explore existing frameworks for score-distilling text-to-3D generation and identify the main causes of the view inconsistency problem---the embedded bias of 2D diffusion models. Based on these findings, we propose two approaches to debias the score-distillation frameworks for view-consistent text-to-3D generation. Our first approach, called score debiasing, involves cutting off the score estimated by 2D diffusion models and gradually increasing the truncation value throughout the optimization process. Our second approach, called prompt debiasing, identifies conflicting words between user prompts and view prompts using a language model, and adjusts the discrepancy between view prompts and the viewing direction of an object. Our experimental results show that our methods improve the realism of the generated 3D objects by significantly reducing artifacts and achieve a good trade-off between faithfulness to the 2D diffusion models and 3D consistency with little overhead. Our project page is available at~\url{https://susunghong.github.io/Debiased-Score-Distillation-Sampling/}. | Debiasing Scores and Prompts of 2D Diffusion for View-consistent Text-to-3D Generation | [
"Susung Hong",
"Donghoon Ahn",
"Seungryong Kim"
] | Conference | poster | 2303.15413 | [
"https://github.com/threestudio-project/threestudio"
] | https://huggingface.co/papers/2303.15413 | 1 | 0 | 0 | 3 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=jfsjKBDB1z | @inproceedings{
uziel2023from,
title={From ViT Features to Training-free Video Object Segmentation via Streaming-data Mixture Models},
author={Roy Uziel and Or Dinari and Oren Freifeld},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jfsjKBDB1z}
} | In the task of semi-supervised video object segmentation, the input is the binary mask of an object in the first frame, and the desired output consists of the corresponding masks of that object in the subsequent frames. Existing leading solutions have two main drawbacks: 1) an expensive and typically-supervised training on videos; 2) a large memory footprint during inference. Here we present a training-free solution, with a low-memory footprint, that yields state-of-the-art results. The proposed method combines pre-trained deep learning-based features (trained on still images) with more classical methods for streaming-data clustering. Designed to adapt to temporal concept drifts and generalize to diverse video content without relying on annotated images or videos, the method eliminates the need for additional training or fine-tuning, ensuring fast inference and immediate applicability to new videos. Concretely, we represent an object via a dynamic ensemble of temporally- and spatially-coherent mixtures over a representation built from pre-trained ViT features and positional embeddings. A convolutional conditional random field further improves spatial coherence and helps reject outliers. We demonstrate the efficacy of the method on key benchmarks: the DAVIS-2017 and YouTube-VOS 2018 validation datasets. Moreover, by the virtue of the low-memory footprint of the compact cluster-based representation, the method scales gracefully to high-resolution ViT features. Our code is available at https://github.com/BGU-CS-VIL/Training-Free-VOS | From ViT Features to Training-free Video Object Segmentation via Streaming-data Mixture Models | [
"Roy Uziel",
"Or Dinari",
"Oren Freifeld"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=jcnvDO96N5 | @inproceedings{
mozaffari2023mkor,
title={{MKOR}: Momentum-Enabled Kronecker-Factor-Based Optimizer Using Rank-1 Updates},
author={Mohammad Mozaffari and Sikan Li and Zhao Zhang and Maryam Mehri Dehnavi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jcnvDO96N5}
} | This work proposes a Momentum-Enabled Kronecker-Factor-Based Optimizer Using Rank-1 updates, called MKOR, that improves the training time and convergence properties of deep neural networks (DNNs). Second-order techniques, while enjoying higher convergence rates vs first-order counterparts, have cubic complexity with respect to either the model size and/or the training batch size. Hence they exhibit poor scalability and performance in transformer models, e.g. large language models (LLMs), because the batch sizes in these models scale by the attention mechanism sequence length, leading to large model size and batch sizes. MKOR's complexity is quadratic with respect to the model size, alleviating the computation bottlenecks in second-order methods. Because of their high computation complexity, state-of-the-art implementations of second-order methods can only afford to update the second order information infrequently, and thus do not fully exploit the promise of better convergence from these updates. By reducing the communication complexity of the second-order updates as well as achieving a linear communication complexity, MKOR increases the frequency of second order updates. We also propose a hybrid version of MKOR (called MKOR-H) that mid-training falls backs to a first order optimizer if the second order updates no longer accelerate convergence. Our experiments show that MKOR outperforms state -of-the-art first order methods, e.g. the LAMB optimizer, and best implementations of second-order methods, i.e. KAISA/KFAC, up to 2.57x and 1.85x respectively on BERT-Large-Uncased on 64 GPUs. | MKOR: Momentum-Enabled Kronecker-Factor-Based Optimizer Using Rank-1 Updates | [
"Mohammad Mozaffari",
"Sikan Li",
"Zhao Zhang",
"Maryam Mehri Dehnavi"
] | Conference | poster | 2306.01685 | [
"https://github.com/mohammad-mozaffari/mkor"
] | https://huggingface.co/papers/2306.01685 | 0 | 0 | 1 | 4 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=jcRB6xHdJ2 | @inproceedings{
liu2023interaction,
title={Interaction Measures, Partition Lattices and Kernel Tests for High-Order Interactions},
author={Zhaolu Liu and Robert Peach and Pedro A. M. Mediano and Mauricio Barahona},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jcRB6xHdJ2}
} | Models that rely solely on pairwise relationships often fail to capture the complete statistical structure of the complex multivariate data found in diverse domains, such as socio-economic, ecological, or biomedical systems. Non-trivial dependencies between groups of more than two variables can play a significant role in the analysis and modelling of such systems, yet extracting such high-order interactions from data remains challenging. Here, we introduce a hierarchy of $d$-order ($d \geq 2$) interaction measures, increasingly inclusive of possible factorisations of the joint probability distribution, and define non-parametric, kernel-based tests to establish systematically the statistical significance of $d$-order interactions. We also establish mathematical links with lattice theory, which elucidate the derivation of the interaction measures and their composite permutation tests; clarify the connection of simplicial complexes with kernel matrix centring; and provide a means to enhance computational efficiency. We illustrate our results numerically with validations on synthetic data, and through an application to neuroimaging data. | Interaction Measures, Partition Lattices and Kernel Tests for High-Order Interactions | [
"Zhaolu Liu",
"Robert Peach",
"Pedro A. M. Mediano",
"Mauricio Barahona"
] | Conference | poster | 2306.00904 | [
"https://github.com/barahona-research-group/streitberg-interaction"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=jcJVgIFY2r | @inproceedings{
yu2023generator,
title={Generator Born from Classifier},
author={Runpeng Yu and Xinchao Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jcJVgIFY2r}
} | In this paper, we make a bold attempt toward an ambitious task: given a pre-trained classifier, we aim to reconstruct an image generator, without relying on any data samples. From a black-box perspective, this challenge seems intractable, since it inevitably involves identifying the inverse function for a classifier, which is, by nature, an information extraction process. As such, we resort to leveraging the knowledge encapsulated within the parameters of the neural network. Grounded on the theory of Maximum-Margin Bias of gradient descent, we propose a novel learning paradigm, in which the generator is trained to ensure that the convergence conditions of the network parameters are satisfied over the generated distribution of the samples. Empirical validation from various image generation tasks substantiates the efficacy of our strategy. | Generator Born from Classifier | [
"Runpeng Yu",
"Xinchao Wang"
] | Conference | poster | 2312.02470 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=jZYf1GxH1V | @inproceedings{
liu2023design,
title={Design from Policies: Conservative Test-Time Adaptation for Offline Policy Optimization},
author={Jinxin Liu and Hongyin Zhang and Zifeng Zhuang and Yachen Kang and Donglin Wang and Bin Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jZYf1GxH1V}
} | In this work, we decouple the iterative bi-level offline RL (value estimation and policy extraction) from the offline training phase, forming a non-iterative bi-level paradigm and avoiding the iterative error propagation over two levels. Specifically, this non-iterative paradigm allows us to conduct inner-level optimization (value estimation) in training, while performing outer-level optimization (policy extraction) in testing. Naturally, such a paradigm raises three core questions that are not fully answered by prior non-iterative offline RL counterparts like reward-conditioned policy: (q1) What information should we transfer from the inner-level to the outer-level? (q2) What should we pay attention to when exploiting the transferred information for safe/confident outer-level optimization? (q3) What are the benefits of concurrently conducting outer-level optimization during testing? Motivated by model-based optimization (MBO), we propose DROP (design from policies), which fully answers the above questions. Specifically, in the inner-level, DROP decomposes offline data into multiple subsets, and learns an MBO score model (a1). To keep safe exploitation to the score model in the outer-level, we explicitly learn a behavior embedding and introduce a conservative regularization (a2). During testing, we show that DROP permits deployment adaptation, enabling an adaptive inference across states (a3). Empirically, we evaluate DROP on various tasks, showing that DROP gains comparable or better performance compared to prior methods. | Design from Policies: Conservative Test-Time Adaptation for Offline Policy Optimization | [
"Jinxin Liu",
"Hongyin Zhang",
"Zifeng Zhuang",
"Yachen Kang",
"Donglin Wang",
"Bin Wang"
] | Conference | poster | 2306.14479 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=jYIknUIgkd | @inproceedings{
beckers2023moral,
title={Moral Responsibility for {AI} Systems},
author={Sander Beckers},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jYIknUIgkd}
} | As more and more decisions that have a significant ethical dimension are being outsourced to AI systems, it is important to have a definition of _moral responsibility_ that can be applied to AI systems. Moral responsibility for an outcome of an agent who performs some action is commonly taken to involve both a _causal condition_ and an _epistemic condition_: the action should cause the outcome, and the agent should have been aware - in some form or other - of the possible moral consequences of their action. This paper presents a formal definition of both conditions within the framework of causal models. I compare my approach to the existing approaches of Braham and van Hees (BvH) and of Halpern and Kleiman-Weiner (HK). I then generalize my definition into a _degree of responsibility_. | Moral Responsibility for AI Systems | [
"Sander Beckers"
] | Conference | poster | 2310.18040 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=jX49iKr6vb | @inproceedings{
seligmann2023beyond,
title={Beyond Deep Ensembles: A Large-Scale Evaluation of Bayesian Deep Learning under Distribution Shift},
author={Florian Seligmann and Philipp Becker and Michael Volpp and Gerhard Neumann},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jX49iKr6vb}
} | Bayesian deep learning (BDL) is a promising approach to achieve well-calibrated predictions on distribution-shifted data. Nevertheless, there exists no large-scale survey that evaluates recent SOTA methods on diverse, realistic, and challenging benchmark tasks in a systematic manner. To provide a clear picture of the current state of BDL research, we evaluate modern BDL algorithms on real-world datasets from the WILDS collection containing challenging classification and regression tasks, with a focus on generalization capability and calibration under distribution shift. We compare the algorithms on a wide range of large, convolutional and transformer-based neural network architectures. In particular, we investigate a signed version of the expected calibration error that reveals whether the methods are over- or underconfident, providing further insight into the behavior of the methods. Further, we provide the first systematic evaluation of BDL for fine-tuning large pre-trained models, where training from scratch is prohibitively expensive. Finally, given the recent success of Deep Ensembles, we extend popular single-mode posterior approximations to multiple modes by the use of ensembles. While we find that ensembling single-mode approximations generally improves the generalization capability and calibration of the models by a significant margin, we also identify a failure mode of ensembles when finetuning large transformer-based language models.
In this setting, variational inference based approaches such as last-layer Bayes By Backprop outperform other methods in terms of accuracy by a large margin, while modern approximate inference algorithms such as SWAG achieve the best calibration. | Beyond Deep Ensembles: A Large-Scale Evaluation of Bayesian Deep Learning under Distribution Shift | [
"Florian Seligmann",
"Philipp Becker",
"Michael Volpp",
"Gerhard Neumann"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=jUdZCcoOu3 | @inproceedings{
xue2023raphael,
title={{RAPHAEL}: Text-to-Image Generation via Large Mixture of Diffusion Paths},
author={Zeyue Xue and Guanglu Song and Qiushan Guo and Boxiao Liu and Zhuofan Zong and Yu Liu and Ping Luo},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jUdZCcoOu3}
} | Text-to-image generation has recently witnessed remarkable achievements. We introduce a text-conditional image diffusion model, termed RAPHAEL, to generate highly artistic images, which accurately portray the text prompts, encompassing multiple nouns, adjectives, and verbs. This is achieved by stacking tens of mixture-of-experts (MoEs) layers, i.e., space-MoE and time-MoE layers, enabling billions of diffusion paths (routes) from the network input to the output. Each path intuitively functions as a "painter" for depicting a particular textual concept onto a specified image region at a diffusion timestep. Comprehensive experiments reveal that RAPHAEL outperforms recent cutting-edge models, such as Stable Diffusion, ERNIE-ViLG 2.0, DeepFloyd, and DALL-E 2, in terms of both image quality and aesthetic appeal. Firstly, RAPHAEL exhibits superior performance in switching images across diverse styles, such as Japanese comics, realism, cyberpunk, and ink illustration. Secondly, a single model with three billion parameters, trained on 1,000 A100 GPUs for two months, achieves a state-of-the-art zero-shot FID score of 6.61 on the COCO dataset. Furthermore, RAPHAEL significantly surpasses its counterparts in human evaluation on the ViLG-300 benchmark. We believe that RAPHAEL holds the potential to propel the frontiers of image generation research in both academia and industry, paving the way for future breakthroughs in this rapidly evolving field. More details can be found on a webpage: https://raphael-painter.github.io/. | RAPHAEL: Text-to-Image Generation via Large Mixture of Diffusion Paths | [
"Zeyue Xue",
"Guanglu Song",
"Qiushan Guo",
"Boxiao Liu",
"Zhuofan Zong",
"Yu Liu",
"Ping Luo"
] | Conference | poster | 2305.18295 | [
""
] | https://huggingface.co/papers/2305.18295 | 3 | 7 | 0 | 7 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=jU9qiRMDtR | @inproceedings{
wu2023spring,
title={{SPRING}: Studying Papers and Reasoning to play Games},
author={Yue Wu and So Yeon Min and Shrimai Prabhumoye and Yonatan Bisk and Ruslan Salakhutdinov and Amos Azaria and Tom Mitchell and Yuanzhi Li},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jU9qiRMDtR}
} | Open-world survival games pose significant challenges for AI algorithms due to their multi-tasking, deep exploration, and goal prioritization requirements. Despite reinforcement learning (RL) being popular for solving games, its high sample complexity limits its effectiveness in complex open-world games like Crafter or Minecraft. We propose a novel approach, SPRING, to read Crafter's original academic paper and use the knowledge learned to reason and play the game through a large language model (LLM).
Prompted with the LaTeX source as game context and a description of the agent's current observation, our SPRING framework employs a directed acyclic graph (DAG) with game-related questions as nodes and dependencies as edges. We identify the optimal action to take in the environment by traversing the DAG and calculating LLM responses for each node in topological order, with the LLM's answer to the final node directly translating to environment actions.
In our experiments, we study the quality of in-context "reasoning" induced by different forms of prompts under the setting of the Crafter environment. Our experiments suggest that LLMs, when prompted with consistent chain-of-thought, have great potential in completing sophisticated high-level trajectories. Quantitatively, SPRING with GPT-4 outperforms all state-of-the-art RL baselines, trained for 1M steps, without any training.
Finally, we show the potential of Crafter as a test bed for LLMs. Code at github.com/holmeswww/SPRING | SPRING: Studying Papers and Reasoning to play Games | [
"Yue Wu",
"So Yeon Min",
"Shrimai Prabhumoye",
"Yonatan Bisk",
"Ruslan Salakhutdinov",
"Amos Azaria",
"Tom Mitchell",
"Yuanzhi Li"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=jSuhnO9QJv | @inproceedings{
moayeri2023spuriosity,
title={Spuriosity Rankings: Sorting Data to Measure and Mitigate Biases},
author={Mazda Moayeri and Wenxiao Wang and Sahil Singla and Soheil Feizi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jSuhnO9QJv}
} | We present a simple but effective method to measure and mitigate model biases caused by reliance on spurious cues. Instead of requiring costly changes to one's data or model training, our method better utilizes the data one already has by sorting them. Specifically, we rank images within their classes based on spuriosity (the degree to which common spurious cues are present), proxied via deep neural features of an interpretable network. With spuriosity rankings, it is easy to identify minority subpopulations (i.e. low spuriosity images) and assess model bias as the gap in accuracy between high and low spuriosity images. One can even efficiently remove a model's bias at little cost to accuracy by finetuning its classification head on low spuriosity images, resulting in fairer treatment of samples regardless of spuriosity. We demonstrate our method on ImageNet, annotating $5000$ class-feature dependencies ($630$ of which we find to be spurious) and generating a dataset of $325k$ soft segmentations for these features along the way. Having computed spuriosity rankings via the identified spurious neural features, we assess biases for $89$ diverse models and find that class-wise biases are highly correlated across models. Our results suggest that model bias due to spurious feature reliance is influenced far more by what the model is trained on than how it is trained. | Spuriosity Rankings: Sorting Data to Measure and Mitigate Biases | [
"Mazda Moayeri",
"Wenxiao Wang",
"Sahil Singla",
"Soheil Feizi"
] | Conference | spotlight | 2212.02648 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=jS4DUGOtBD | @inproceedings{
theisen2023when,
title={When are ensembles really effective?},
author={Ryan Theisen and Hyunsuk Kim and Yaoqing Yang and Liam Hodgkinson and Michael W. Mahoney},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jS4DUGOtBD}
} | Ensembling has a long history in statistical data analysis, with many impactful applications.
However, in many modern machine learning settings, the benefits of ensembling are less ubiquitous and less obvious.
We study, both theoretically and empirically, the fundamental question of when ensembling yields significant performance improvements in classification tasks.
Theoretically, we prove new results relating the \emph{ensemble improvement rate} (a measure of how much ensembling decreases the error rate versus a single model, on a relative scale) to the \emph{disagreement-error ratio}.
We show that ensembling improves performance significantly whenever the disagreement rate is large relative to the average error rate; and that, conversely, one classifier is often enough whenever the disagreement rate is low relative to the average error rate.
On the way to proving these results, we derive, under a mild condition called \emph{competence}, improved upper and lower bounds on the average test error rate of the majority vote classifier.
To complement this theory, we study ensembling empirically in a variety of settings, verifying the predictions made by our theory, and identifying practical scenarios where ensembling does and does not result in large performance improvements.
Perhaps most notably, we demonstrate a distinct difference in behavior between interpolating models (popular in current practice) and non-interpolating models (such as tree-based methods, where ensembling is popular), demonstrating that ensembling helps considerably more in the latter case than in the former. | When are ensembles really effective? | [
"Ryan Theisen",
"Hyunsuk Kim",
"Yaoqing Yang",
"Liam Hodgkinson",
"Michael W. Mahoney"
] | Conference | poster | 2305.12313 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=jRL6ErxMVB | @inproceedings{
ma2023learning,
title={Learning Better with Less: Effective Augmentation for Sample-Efficient Visual Reinforcement Learning},
author={Guozheng Ma and Linrui Zhang and Haoyu Wang and Lu Li and Zilin Wang and Zhen Wang and Li Shen and Xueqian Wang and Dacheng Tao},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jRL6ErxMVB}
} | Data augmentation (DA) is a crucial technique for enhancing the sample efficiency of visual reinforcement learning (RL) algorithms.
Notably, employing simple observation transformations alone can yield outstanding performance without extra auxiliary representation tasks or pre-trained encoders. However, it remains unclear which attributes of DA account for its effectiveness in achieving sample-efficient visual RL. To investigate this issue and further explore the potential of DA, this work conducts comprehensive experiments to assess the impact of DA's attributes on its efficacy and provides the following insights and improvements: (1) For individual DA operations, we reveal that both ample spatial diversity and slight hardness are indispensable. Building on this finding, we introduce Random PadResize (Rand PR), a new DA operation that offers abundant spatial diversity with minimal hardness. (2) For multi-type DA fusion schemes, the increased DA hardness and unstable data distribution result in the current fusion schemes being unable to achieve higher sample efficiency than their corresponding individual operations. Taking the non-stationary nature of RL into account, we propose a RL-tailored multi-type DA fusion scheme called Cycling Augmentation (CycAug), which performs periodic cycles of different DA operations to increase type diversity while maintaining data distribution consistency. Extensive evaluations on the DeepMind Control suite and CARLA driving simulator demonstrate that our methods achieve superior sample efficiency compared with the prior state-of-the-art methods. | Learning Better with Less: Effective Augmentation for Sample-Efficient Visual Reinforcement Learning | [
"Guozheng Ma",
"Linrui Zhang",
"Haoyu Wang",
"Lu Li",
"Zilin Wang",
"Zhen Wang",
"Li Shen",
"Xueqian Wang",
"Dacheng Tao"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=jR2FkqW6GB | @inproceedings{
brown2023is,
title={Is Learning in Games Good for the Learners?},
author={William Brown and Jon Schneider and Kiran Vodrahalli},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jR2FkqW6GB}
} | We consider a number of questions related to tradeoffs between reward and regret in repeated gameplay between two agents. To facilitate this, we introduce a notion of generalized equilibrium which allows for asymmetric regret constraints, and yields polytopes of feasible values for each agent and pair of regret constraints, where we show that any such equilibrium is reachable by a pair of algorithms which maintain their regret guarantees against arbitrary opponents. As a central example, we highlight the case one agent is no-swap and the other's regret is unconstrained. We show that this captures an extension of Stackelberg equilibria with a matching optimal value, and that there exists a wide class of games where a player can significantly increase their utility by deviating from a no-swap-regret algorithm against a no-swap learner (in fact, almost any game without pure Nash equilibria is of this form). Additionally, we make use of generalized equilibria to consider tradeoffs in terms of the opponent's algorithm choice. We give a tight characterization for the maximal reward obtainable against some no-regret learner, yet we also show a class of games in which this is bounded away from the value obtainable against the class of common "mean-based" no-regret algorithms. Finally, we consider the question of learning reward-optimal strategies via repeated play with a no-regret agent when the game is initially unknown. Again we show tradeoffs depending on the opponent's learning algorithm: the Stackelberg strategy is learnable in exponential time with any no-regret agent (and in polynomial time with any no-adaptive-regret agent) for any game where it is learnable via queries, and there are games where it is learnable in polynomial time against any no-swap-regret agent but requires exponential time against a mean-based no-regret agent. | Is Learning in Games Good for the Learners? | [
"William Brown",
"Jon Schneider",
"Kiran Vodrahalli"
] | Conference | spotlight | 2305.19496 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=jOuxQGRVoQ | @inproceedings{
shao2023iebins,
title={{IEB}ins: Iterative Elastic Bins for Monocular Depth Estimation},
author={Shuwei Shao and Zhongcai Pei and Xingming Wu and Zhong Liu and Weihai Chen and Zhengguo Li},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jOuxQGRVoQ}
} | Monocular depth estimation (MDE) is a fundamental topic of geometric computer vision and a core technique for many downstream applications. Recently, several methods reframe the MDE as a classification-regression problem where a linear combination of probabilistic distribution and bin centers is used to predict depth. In this paper, we propose a novel concept of iterative elastic bins (IEBins) for the classification-regression-based MDE. The proposed IEBins aims to search for high-quality depth by progressively optimizing the search range, which involves multiple stages and each stage performs a finer-grained depth search in the target bin on top of its previous stage. To alleviate the possible error accumulation during the iterative process, we utilize a novel elastic target bin to replace the original target bin, the width of which is adjusted elastically based on the depth uncertainty. Furthermore, we develop a dedicated framework composed of a feature extractor and an iterative optimizer that has powerful temporal context modeling capabilities benefiting from the GRU-based architecture. Extensive experiments on the KITTI, NYU-Depth-v2 and SUN RGB-D datasets demonstrate that the proposed method surpasses prior state-of-the-art competitors. The source code is publicly available at https://github.com/ShuweiShao/IEBins. | IEBins: Iterative Elastic Bins for Monocular Depth Estimation | [
"Shuwei Shao",
"Zhongcai Pei",
"Xingming Wu",
"Zhong Liu",
"Weihai Chen",
"Zhengguo Li"
] | Conference | poster | 2309.14137 | [
"https://github.com/shuweishao/iebins"
] | https://huggingface.co/papers/2309.14137 | 0 | 0 | 0 | 6 | 1 | [] | [] | [
"umuthopeyildirim/IEBins-Depth-Estimation"
] |
null | https://openreview.net/forum?id=jL2eJxPK88 | @inproceedings{
sato2023fast,
title={Fast Partitioned Learned Bloom Filter},
author={Atsuki Sato and Yusuke Matsui},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jL2eJxPK88}
} | A Bloom filter is a memory-efficient data structure for approximate membership queries used in numerous fields of computer science.
Recently, learned Bloom filters that achieve better memory efficiency using machine learning models have attracted attention.
One such filter, the partitioned learned Bloom filter (PLBF), achieves excellent memory efficiency.
However, PLBF requires a $\mathcal{O}(N^3k)$ time complexity to construct the data structure, where $N$ and $k$ are the hyperparameters of PLBF.
One can improve memory efficiency by increasing $N$, but the construction time becomes extremely long.
Thus, we propose two methods that can reduce the construction time while maintaining the memory efficiency of PLBF.
First, we propose fast PLBF, which can construct the same data structure as PLBF with a smaller time complexity $\mathcal{O}(N^2k)$.
Second, we propose fast PLBF++, which can construct the data structure with even smaller time complexity $\mathcal{O}(Nk\log N + Nk^2)$.
Fast PLBF++ does not necessarily construct the same data structure as PLBF.
Still, it is almost as memory efficient as PLBF, and it is proved that fast PLBF++ has the same data structure as PLBF when the distribution satisfies a certain constraint.
Our experimental results from real-world datasets show that (i) fast PLBF and fast PLBF++ can construct the data structure up to 233 and 761 times faster than PLBF, (ii) fast PLBF can achieve the same memory efficiency as PLBF, and (iii) fast PLBF++ can achieve almost the same memory efficiency as PLBF.
The codes are available at https://github.com/atsukisato/FastPLBF. | Fast Partitioned Learned Bloom Filter | [
"Atsuki Sato",
"Yusuke Matsui"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=jIhX7SpfCz | @inproceedings{
wang2023club,
title={CluB: Cluster Meets {BEV} for Li{DAR}-Based 3D Object Detection},
author={Yingjie Wang and Jiajun Deng and Yuenan Hou and Yao Li and Yu Zhang and Jianmin Ji and Wanli Ouyang and Yanyong Zhang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jIhX7SpfCz}
} | Currently, LiDAR-based 3D detectors are broadly categorized into two groups, namely, BEV-based detectors and cluster-based detectors.
BEV-based detectors capture the contextual information from the Bird's Eye View (BEV) and fill their center voxels via feature diffusion with a stack of convolution layers, which, however, weakens the capability of presenting an object with the center point.
On the other hand, cluster-based detectors exploit the voting mechanism and aggregate the foreground points into object-centric clusters for further prediction.
In this paper, we explore how to effectively combine these two complementary representations into a unified framework.
Specifically, we propose a new 3D object detection framework, referred to as CluB, which incorporates an auxiliary cluster-based branch into the BEV-based detector by enriching the object representation at both feature and query levels.
Technically, CluB is comprised of two steps.
First, we construct a cluster feature diffusion module to establish the association between cluster features and BEV features in a subtle and adaptive fashion.
Based on that, an imitation loss is introduced to distill object-centric knowledge from the cluster features to the BEV features.
Second, we design a cluster query generation module to leverage the voting centers directly from the cluster branch, thus enriching the diversity of object queries.
Meanwhile, a direction loss is employed to encourage a more accurate voting center for each cluster.
Extensive experiments are conducted on Waymo and nuScenes datasets, and our CluB achieves state-of-the-art performance on both benchmarks. | CluB: Cluster Meets BEV for LiDAR-Based 3D Object Detection | [
"Yingjie Wang",
"Jiajun Deng",
"Yuenan Hou",
"Yao Li",
"Yu Zhang",
"Jianmin Ji",
"Wanli Ouyang",
"Yanyong Zhang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=jEQRoJzDx8 | @inproceedings{
engelken2023gradient,
title={Gradient Flossing: Improving Gradient Descent through Dynamic Control of Jacobians},
author={Rainer Engelken},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jEQRoJzDx8}
} | Training recurrent neural networks (RNNs) remains a challenge due to the instability of gradients across long time horizons, which can lead to exploding and vanishing gradients. Recent research has linked these problems to the values of Lyapunov exponents for the forward-dynamics, which describe the growth or shrinkage of infinitesimal perturbations. Here, we propose gradient flossing, a novel approach to tackling gradient instability by pushing Lyapunov exponents of the forward dynamics toward zero during learning. We achieve this by regularizing Lyapunov exponents through backpropagation using differentiable linear algebra. This enables us to "floss" the gradients, stabilizing them and thus improving network training. We show that gradient flossing controls not only the gradient norm but also the condition number of the long-term Jacobian, facilitating multidimensional error feedback propagation. We find that applying gradient flossing before training enhances both the success rate and convergence speed for tasks involving long time horizons.
For challenging tasks, we show that gradient flossing during training can further increase the time horizon that can be bridged by backpropagation through time. Moreover, we demonstrate the effectiveness of our approach on various RNN architectures and tasks of variable temporal complexity. Additionally, we provide a simple implementation of our gradient flossing algorithm that can be used in practice. Our results indicate that gradient flossing via regularizing Lyapunov exponents can significantly enhance the effectiveness of RNN training and mitigate the exploding and vanishing gradients problem. | Gradient Flossing: Improving Gradient Descent through Dynamic Control of Jacobians | [
"Rainer Engelken"
] | Conference | poster | 2312.17306 | [
"https://github.com/rainerengelken/gradientflossing"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=jDIlzSU8wJ | @inproceedings{
saxena2023the,
title={The Surprising Effectiveness of Diffusion Models for Optical Flow and Monocular Depth Estimation},
author={Saurabh Saxena and Charles Herrmann and Junhwa Hur and Abhishek Kar and Mohammad Norouzi and Deqing Sun and David J. Fleet},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jDIlzSU8wJ}
} | Denoising diffusion probabilistic models have transformed image generation with their impressive fidelity and diversity.
We show that they also excel in estimating optical flow and monocular depth, surprisingly without task-specific architectures and loss functions that are predominant for these tasks.
Compared to the point estimates of conventional regression-based methods, diffusion models also enable Monte Carlo inference, e.g., capturing uncertainty and ambiguity in flow and depth.
With self-supervised pre-training, the combined use of synthetic and real data for supervised training, and technical innovations (infilling and step-unrolled denoising diffusion training) to handle noisy-incomplete training data, one can train state-of-the-art diffusion models for depth and optical flow estimation, with additional zero-shot coarse-to-fine refinement for high resolution estimates.
Extensive experiments focus on quantitative performance against benchmarks, ablations, and the model's ability to capture uncertainty and multimodality, and impute missing values. Our model obtains a state-of-the-art relative depth error of 0.074 on the indoor NYU benchmark and an Fl-all score of 3.26\% on the KITTI optical flow benchmark, about 25\% better than the best published method. | The Surprising Effectiveness of Diffusion Models for Optical Flow and Monocular Depth Estimation | [
"Saurabh Saxena",
"Charles Herrmann",
"Junhwa Hur",
"Abhishek Kar",
"Mohammad Norouzi",
"Deqing Sun",
"David J. Fleet"
] | Conference | oral | 2306.01923 | [
""
] | https://huggingface.co/papers/2306.01923 | 2 | 2 | 0 | 7 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=jCPRG3FuHV | @inproceedings{
zhang2023learning,
title={Learning Repeatable Speech Embeddings Using An Intra-class Correlation Regularizer},
author={Jianwei Zhang and Suren Jayasuriya and Visar Berisha},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jCPRG3FuHV}
} | A good supervised embedding for a specific machine learning task is only sensitive to changes in the label of interest and is invariant to other confounding factors. We leverage the concept of repeatability from measurement theory to describe this property and propose to use the intra-class correlation coefficient (ICC) to evaluate the repeatability of embeddings. We then propose a novel regularizer, the ICC regularizer, as a complementary component for contrastive losses to guide deep neural networks to produce embeddings with higher repeatability. We use simulated data to explain why the ICC regularizer works better on minimizing the intra-class variance than the contrastive loss alone. We implement the ICC regularizer and apply it to three speech tasks: speaker verification, voice style conversion, and a clinical application for detecting dysphonic voice. The experimental results demonstrate that adding an ICC regularizer can improve the repeatability of learned embeddings compared to only using the contrastive loss; further, these embeddings lead to improved performance in these downstream tasks. | Learning Repeatable Speech Embeddings Using An Intra-class Correlation Regularizer | [
"Jianwei Zhang",
"Suren Jayasuriya",
"Visar Berisha"
] | Conference | poster | 2310.17049 | [
"https://github.com/vigor-jzhang/icc-regularizer"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=jB4wsc1DQW | @inproceedings{
huang2023hierarchical,
title={Hierarchical Adaptive Value Estimation for Multi-modal Visual Reinforcement Learning},
author={Yangru Huang and Peixi Peng and Yifan Zhao and Haoran Xu and Mengyue Geng and Yonghong Tian},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jB4wsc1DQW}
} | Integrating RGB frames with alternative modality inputs is gaining increasing traction in many vision-based reinforcement learning (RL) applications. Existing multi-modal vision-based RL methods usually follow a Global Value Estimation (GVE) pipeline, which uses a fused modality feature to obtain a unified global environmental description. However, such a feature-level fusion paradigm with a single critic may fall short in policy learning as it tends to overlook the distinct values of each modality. To remedy this, this paper proposes a Local modality-customized Value Estimation (LVE) paradigm, which dynamically estimates the contribution and adjusts the importance weight of each modality from a value-level perspective. Furthermore, a task-contextual re-fusion process is developed to achieve a task-level re-balance of estimations from both feature and value levels. To this end, a Hierarchical Adaptive Value Estimation (HAVE) framework is formed, which adaptively coordinates the contributions of individual modalities as well as their collective efficacy. Agents trained by HAVE are able to exploit the unique characteristics of various modalities while capturing their intricate interactions, achieving substantially improved performance. We specifically highlight the potency of our approach within the challenging landscape of autonomous driving, utilizing the CARLA benchmark with neuromorphic event and depth data to demonstrate HAVE's capability and the effectiveness of its distinct components. | Hierarchical Adaptive Value Estimation for Multi-modal Visual Reinforcement Learning | [
"Yangru Huang",
"Peixi Peng",
"Yifan Zhao",
"Haoran Xu",
"Mengyue Geng",
"Yonghong Tian"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=jA235JGM09 | @inproceedings{
wei2023jailbroken,
title={Jailbroken: How Does {LLM} Safety Training Fail?},
author={Alexander Wei and Nika Haghtalab and Jacob Steinhardt},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=jA235JGM09}
} | Large language models trained for safety and harmlessness remain susceptible to adversarial misuse, as evidenced by the prevalence of “jailbreak” attacks on early releases of ChatGPT that elicit undesired behavior. Going beyond recognition of the issue, we investigate why such attacks succeed and how they can be created. We hypothesize two failure modes of safety training: competing objectives and mismatched generalization. Competing objectives arise when a model’s capabilities and safety goals conflict, while mismatched generalization occurs when safety training fails to generalize to a domain for which capabilities exist. We use these failure modes to guide jailbreak design and then evaluate state-of-the-art models, including OpenAI’s GPT-4 and Anthropic’s Claude v1.3, against both existing and newly designed attacks. We find that vulnerabilities persist despite the extensive red-teaming and safety-training efforts behind these models. Notably, new attacks utilizing our failure modes succeed on every prompt in a collection of unsafe requests from the models’ red-teaming evaluation sets and outperform existing ad hoc jailbreaks. Our analysis emphasizes the need for safety-capability parity—that safety mechanisms should be as sophisticated as the underlying model—and argues against the idea that scaling alone can resolve these safety failure modes. | Jailbroken: How Does LLM Safety Training Fail? | [
"Alexander Wei",
"Nika Haghtalab",
"Jacob Steinhardt"
] | Conference | oral | 2307.02483 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=j9wGUcS30B | @inproceedings{
moreno-mu{\~n}oz2023on,
title={On Masked Pre-training and the Marginal Likelihood},
author={Pablo Moreno-Mu{\~n}oz and Pol G. Recasens and S{\o}ren Hauberg},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=j9wGUcS30B}
} | Masked pre-training removes random input dimensions and learns a model that can predict the missing values. Empirical results indicate that this intuitive form of self-supervised learning yields models that generalize very well to new domains. A theoretical understanding is, however, lacking. This paper shows that masked pre-training with a suitable cumulative scoring function corresponds to maximizing the model's marginal likelihood, which is de facto the Bayesian model selection measure of generalization. Beyond shedding light on the success of masked pre-training, this insight also suggests that Bayesian models can be trained with appropriately designed self-supervision. Empirically, we confirm the developed theory and explore the main learning principles of masked pre-training in large language models. | On Masked Pre-training and the Marginal Likelihood | [
"Pablo Moreno-Muñoz",
"Pol G. Recasens",
"Søren Hauberg"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=j7x9wW3tCf | @inproceedings{
qi2023learning,
title={Learning from Both Structural and Textual Knowledge for Inductive Knowledge Graph Completion},
author={Kunxun Qi and Jianfeng Du and Hai Wan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=j7x9wW3tCf}
} | Learning rule-based systems plays a pivotal role in knowledge graph completion (KGC). Existing rule-based systems restrict the input of the system to structural knowledge only, which may omit some useful knowledge for reasoning, e.g., textual knowledge. In this paper, we propose a two-stage framework that imposes both structural and textual knowledge to learn rule-based systems. In the first stage, we compute a set of triples with confidence scores (called \emph{soft triples}) from a text corpus by distant supervision, where a textual entailment model with multi-instance learning is exploited to estimate whether a given triple is entailed by a set of sentences. In the second stage, these soft triples are used to learn a rule-based model for KGC. To mitigate the negative impact of noise from soft triples, we propose a new formalism for rules to be learnt, named \emph{text enhanced rules} or \emph{TE-rules} for short. To effectively learn TE-rules, we propose a neural model that simulates the inference of TE-rules. We theoretically show that any set of TE-rules can always be interpreted by a certain parameter assignment of the neural model. We introduce three new datasets to evaluate the effectiveness of our method. Experimental results demonstrate that the introduction of soft triples and TE-rules results in significant performance improvements in inductive link prediction. | Learning from Both Structural and Textual Knowledge for Inductive Knowledge Graph Completion | [
"Kunxun Qi",
"Jianfeng Du",
"Hai Wan"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=j7U4pFkCYB | @inproceedings{
zhou2023dynpoint,
title={DynPoint: Dynamic Neural Point For View Synthesis},
author={Kaichen Zhou and Jia-Xing Zhong and Sangyun Shin and Kai Lu and Yiyuan Yang and Andrew Markham and Niki Trigoni},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=j7U4pFkCYB}
} | The introduction of neural radiance fields has greatly improved the effectiveness of view synthesis for monocular videos. However, existing algorithms face difficulties when dealing with uncontrolled or lengthy scenarios, and require extensive training time specific to each new scenario.
To tackle these limitations, we propose DynPoint, an algorithm designed to facilitate the rapid synthesis of novel views for unconstrained monocular videos.
Rather than encoding the entirety of the scenario information into a latent representation, DynPoint concentrates on predicting the explicit 3D correspondence between neighboring frames to realize information aggregation.
Specifically, this correspondence prediction is achieved through the estimation of consistent depth and scene flow information across frames.
Subsequently, the acquired correspondence is utilized to aggregate information from multiple reference frames to a target frame, by constructing hierarchical neural point clouds.
The resulting framework enables swift and accurate view synthesis for desired views of target frames.
The experimental results demonstrate the considerable acceleration of training time achieved by our proposed method, typically an order of magnitude, while yielding outcomes comparable to those of prior approaches. Furthermore, our method exhibits strong robustness in handling long-duration videos without learning a canonical representation of video content. | DynPoint: Dynamic Neural Point For View Synthesis | [
"Kaichen Zhou",
"Jia-Xing Zhong",
"Sangyun Shin",
"Kai Lu",
"Yiyuan Yang",
"Andrew Markham",
"Niki Trigoni"
] | Conference | poster | 2310.18999 | [
"https://github.com/kaichen-z/dynpoint"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=j5BuTrEj35 | @inproceedings{
muennighoff2023scaling,
title={Scaling Data-Constrained Language Models},
author={Niklas Muennighoff and Alexander M Rush and Boaz Barak and Teven Le Scao and Nouamane Tazi and Aleksandra Piktus and Sampo Pyysalo and Thomas Wolf and Colin Raffel},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=j5BuTrEj35}
} | The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations. | Scaling Data-Constrained Language Models | [
"Niklas Muennighoff",
"Alexander M Rush",
"Boaz Barak",
"Teven Le Scao",
"Nouamane Tazi",
"Aleksandra Piktus",
"Sampo Pyysalo",
"Thomas Wolf",
"Colin Raffel"
] | Conference | oral | 2305.16264 | [
"https://github.com/huggingface/datablations"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=j5AoleAIru | @inproceedings{
yarom2023what,
title={What You See is What You Read? Improving Text-Image Alignment Evaluation},
author={Michal Yarom and Yonatan Bitton and Soravit Changpinyo and Roee Aharoni and Jonathan Herzig and Oran Lang and Eran Ofek and Idan Szpektor},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=j5AoleAIru}
} | Automatically determining whether a text and a corresponding image are semantically aligned is a significant challenge for vision-language models, with applications in generative text-to-image and image-to-text tasks. In this work, we study methods for automatic text-image alignment evaluation. We first introduce SeeTRUE: a comprehensive evaluation set, spanning multiple datasets from both text-to-image and image-to-text generation tasks, with human judgements for whether a given text-image pair is semantically aligned. We then describe two automatic methods to determine alignment: the first involving a pipeline based on question generation and visual question answering models, and the second employing an end-to-end classification approach by finetuning multimodal pretrained models. Both methods surpass prior approaches in various text-image alignment tasks, with significant improvements in challenging cases that involve complex composition or unnatural images. Finally, we demonstrate how our approaches can localize specific misalignments between an image and a given text, and how they can be used to automatically re-rank candidates in text-to-image generation. | What You See is What You Read? Improving Text-Image Alignment Evaluation | [
"Michal Yarom",
"Yonatan Bitton",
"Soravit Changpinyo",
"Roee Aharoni",
"Jonathan Herzig",
"Oran Lang",
"Eran Ofek",
"Idan Szpektor"
] | Conference | poster | 2305.10400 | [
"https://github.com/yonatanbitton/wysiwyr"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=j4QVhftpYM | @inproceedings{
li2023resolving,
title={Resolving the Tug-of-War: A Separation of Communication and Learning in Federated Learning},
author={Junyi Li and Heng Huang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=j4QVhftpYM}
} | Federated learning (FL) is a promising privacy-preserving machine learning paradigm over distributed data. In this paradigm, each client trains the parameter of a model locally and the server aggregates the parameter from clients periodically. Therefore, we perform the learning and communication over the same set of parameters. However, we find that learning and communication have fundamentally divergent requirements for parameter selection, akin to two opposite teams in a tug-of-war game. To mitigate this discrepancy, we introduce FedSep, a novel two-layer federated learning framework. FedSep consists of separated communication and learning layers for each client and the two layers are connected through decode/encode operations. In particular, the decoding operation is formulated as a minimization problem. We view FedSep as a federated bilevel optimization problem and propose an efficient algorithm to solve it. Theoretically, we demonstrate that its convergence matches that of the standard FL algorithms. The separation of communication and learning in FedSep offers innovative solutions to various challenging problems in FL, such as Communication-Efficient FL and Heterogeneous-Model FL. Empirical validation shows the superior performance of FedSep over various baselines in these tasks. | Resolving the Tug-of-War: A Separation of Communication and Learning in Federated Learning | [
"Junyi Li",
"Heng Huang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=j2oYaFpbrB | @inproceedings{
shang2023active,
title={Active Vision Reinforcement Learning under Limited Visual Observability},
author={Jinghuan Shang and Michael S Ryoo},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=j2oYaFpbrB}
} | In this work, we investigate Active Vision Reinforcement Learning (ActiveVision-RL), where an embodied agent simultaneously learns an action policy for the task while also controlling its visual observations in partially observable environments. We denote the former as motor policy and the latter as sensory policy. For example, humans solve real world tasks by hand manipulation (motor policy) together with eye movements (sensory policy). ActiveVision-RL poses challenges in coordinating the two policies given their mutual influence. We propose SUGARL, Sensorimotor Understanding Guided Active Reinforcement Learning, a framework that models motor and sensory policies separately, but jointly learns them with an intrinsic sensorimotor reward. This learnable reward is assigned by a sensorimotor reward module and incentivizes the sensory policy to select observations that are optimal for inferring its own motor action, inspired by the sensorimotor stage of humans. Through a series of experiments, we show the effectiveness of our method across a range of observability conditions and its adaptability to existing RL algorithms. The sensory policies learned through our method are observed to exhibit effective active vision strategies. | Active Vision Reinforcement Learning under Limited Visual Observability | [
"Jinghuan Shang",
"Michael S Ryoo"
] | Conference | poster | 2306.00975 | [
"https://github.com/elicassion/sugarl"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=j0U6XJubbP | @inproceedings{
cheng2023versatile,
title={Versatile Energy-Based Probabilistic Models for High Energy Physics},
author={Taoli Cheng and Aaron Courville},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=j0U6XJubbP}
} | As a classical generative modeling approach, energy-based models have the natural advantage of flexibility in the form of the energy function. Recently, energy-based models have achieved great success in modeling high-dimensional data in computer vision and natural language processing. In line with these advancements, we build a multi-purpose energy-based probabilistic model for High Energy Physics events at the Large Hadron Collider. This framework builds on a powerful generative model and describes higher-order inter-particle interactions. It suits different encoding architectures and builds on implicit generation. As for applicative aspects, it can serve as a powerful parameterized event generator for physics simulation, a generic anomalous signal detector free from spurious correlations, and an augmented event classifier for particle identification. | Versatile Energy-Based Probabilistic Models for High Energy Physics | [
"Taoli Cheng",
"Aaron Courville"
] | Conference | poster | 2302.00695 | [
"https://github.com/taolicheng/ebm-hep"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=izNfcaHJk0 | @inproceedings{
chen2023privacy,
title={Privacy Amplification via Compression: Achieving the Optimal Privacy-Accuracy-Communication Trade-off in Distributed Mean Estimation},
author={Wei-Ning Chen and Dan Song and Ayfer Ozgur and Peter Kairouz},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=izNfcaHJk0}
} | Privacy and communication constraints are two major bottlenecks in federated learning (FL) and analytics (FA). We study the optimal accuracy of mean and frequency estimation (canonical models for FL and FA respectively) under joint communication and $(\varepsilon, \delta)$-differential privacy (DP) constraints. We consider both the central and the multi-message shuffled DP models. We show that in order to achieve the optimal $\ell_2$ error under $(\varepsilon, \delta)$-DP, it is sufficient for each client to send $\Theta\left( n \min\left(\varepsilon, \varepsilon^2\right)\right)$ bits for FL
and $\Theta\left(\log\left( n\min\left(\varepsilon, \varepsilon^2\right) \right)\right)$ bits for FA to the server, where $n$ is the number of participating clients. Without compression, each client needs $O(d)$ bits and $O\left(\log d\right)$ bits for the mean and frequency estimation problems respectively (where $d$ corresponds to the number of trainable parameters in FL or the domain size in FA), meaning that we can get significant savings in the regime $ n \min\left(\varepsilon, \varepsilon^2\right) = o(d)$, which is often the relevant regime in practice.
We propose two different ways to leverage compression for privacy amplification and achieve the optimal privacy-communication-accuracy trade-offs. In both cases, each client communicates only partial information about its sample and we show that privacy is amplified by randomly selecting the part contributed by each client. In the first method, the random selection is revealed to the server, which results in a central DP guarantee with optimal privacy-communication-accuracy trade-offs. In the second method, the random data parts from the clients are shuffled by a secure shuffler resulting in a multi-message shuffling scheme with the same optimal trade-offs. As a result, we establish the optimal three-way trade-offs between privacy, communication, and accuracy for both the central DP and multi-message shuffling frameworks. | Privacy Amplification via Compression: Achieving the Optimal Privacy-Accuracy-Communication Trade-off in Distributed Mean Estimation | [
"Wei-Ning Chen",
"Dan Song",
"Ayfer Ozgur",
"Peter Kairouz"
] | Conference | poster | 2304.01541 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=iyweRIXAeH | @inproceedings{
diakonikolas2023nearoptimal,
title={Near-Optimal Algorithms for Gaussians with Huber Contamination: Mean Estimation and Linear Regression},
author={Ilias Diakonikolas and Daniel Kane and Ankit Pensia and Thanasis Pittas},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=iyweRIXAeH}
} | We study the fundamental problems of Gaussian mean estimation and linear regression with Gaussian covariates in the presence of Huber contamination. Our main contribution is the design of the first sample near-optimal and almost linear-time algorithms with optimal error guarantees for both these problems. Specifically, for Gaussian robust mean estimation on $\mathbb R^d$ with contamination parameter $\epsilon \in (0, \epsilon_0)$ for a small absolute constant $\epsilon_0$, we give an algorithm with sample complexity $n = \tilde{O}(d/\epsilon^2)$ and almost linear runtime that approximates the target mean within $\ell_2$-error $O(\epsilon)$. This improves on prior work that achieved this error guarantee with polynomially suboptimal sample and time complexity. For robust linear regression, we give the first algorithm with sample complexity $n = \tilde{O}(d/\epsilon^2)$ and almost linear runtime that approximates the target regressor within $\ell_2$-error $O(\epsilon)$. This is the first polynomial sample and time algorithm achieving the optimal error guarantee, answering an open question in the literature. At the technical level, we develop a methodology that yields almost-linear time algorithms for multi-directional filtering that may be of broader interest. | Near-Optimal Algorithms for Gaussians with Huber Contamination: Mean Estimation and Linear Regression | [
"Ilias Diakonikolas",
"Daniel Kane",
"Ankit Pensia",
"Thanasis Pittas"
] | Conference | poster | 2312.01547 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=iy4Of0w8ML | @inproceedings{
akbarnejad2023gpex,
title={{GPEX}, A Framework For Interpreting Artificial Neural Networks},
author={Amir Akbarnejad and Gilbert Bigras and Nilanjan Ray},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=iy4Of0w8ML}
} | The analogy between Gaussian processes (GPs) and deep artificial neural networks (ANNs) has received a lot of interest, and has shown promise to unbox the black box of deep ANNs. Existing theoretical works put strict assumptions on the ANN (e.g. requiring all intermediate layers to be wide, or using specific activation functions). Accommodating those theoretical assumptions is hard in recent deep architectures, and those theoretical conditions need refinement as new deep architectures emerge. In this paper we derive an evidence lower-bound that encourages the GP's posterior to match the ANN's output without any requirement on the ANN. Using our method, we find that on 5 datasets, only a subset of those theoretical assumptions is sufficient. Indeed, in our experiments we used a normal ResNet-18 or feed-forward backbone with a single wide layer at the end. One limitation of training GPs is the lack of scalability with respect to the number of inducing points. We use novel computational techniques that allow us to train GPs with hundreds of thousands of inducing points and with GPU acceleration. As shown in our experiments, doing so has been essential to get a close match between the GPs and the ANNs on 5 datasets. We implement our method as a publicly available tool called GPEX: https://github.com/amirakbarnejad/gpex. On 5 datasets (4 image datasets, and 1 biological dataset) and ANNs with 2 types of functionality (classifier or attention-mechanism) we were able to find GPs whose outputs closely match those of the corresponding ANNs. After matching the GPs to the ANNs, we used the GPs' kernel functions to explain the ANNs' decisions. We provide more than 200 explanations (around 30 in the paper and the rest in the supplementary) which are highly interpretable by humans and show the ability of the obtained GPs to unbox the ANNs' decisions. | GPEX, A Framework For Interpreting Artificial Neural Networks | [
"Amir Akbarnejad",
"Gilbert Bigras",
"Nilanjan Ray"
] | Conference | poster | 2112.09820 | [
"https://github.com/amirakbarnejad/gpex"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=ixcsBZw5pl | @inproceedings{
issa2023nonadversarial,
title={Non-adversarial training of Neural {SDE}s with signature kernel scores},
author={Zacharia Issa and Blanka Horvath and Maud Lemercier and Cristopher Salvi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ixcsBZw5pl}
} | Neural SDEs are continuous-time generative models for sequential data. State-of-the-art performance for irregular time series generation has been previously obtained by training these models adversarially as GANs. However, as typical for GAN architectures, training is notoriously unstable, often suffers from mode collapse, and requires specialised techniques such as weight clipping and gradient penalty to mitigate these issues. In this paper, we introduce a novel class of scoring rules on pathspace based on signature kernels and use them as objective for training Neural SDEs non-adversarially. By showing strict properness of such kernel scores and consistency of the corresponding estimators, we provide existence and uniqueness guarantees for the minimiser. With this formulation, evaluating the generator-discriminator pair amounts to solving a system of linear path-dependent PDEs which allows for memory-efficient adjoint-based backpropagation. Moreover, because the proposed kernel scores are well-defined for paths with values in infinite dimensional spaces of functions, our framework can be easily extended to generate spatiotemporal data. Our procedure significantly outperforms alternative ways of training Neural SDEs on a variety of tasks including the simulation of rough volatility models, the conditional probabilistic forecasts of real-world forex pairs where the conditioning variable is an observed past trajectory, and the mesh-free generation of limit order book dynamics. | Non-adversarial training of Neural SDEs with signature kernel scores | [
"Zacharia Issa",
"Blanka Horvath",
"Maud Lemercier",
"Cristopher Salvi"
] | Conference | poster | 2305.16274 | [
"https://github.com/issaz/sigker-nsdes"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=ixVAXsdtJO | @inproceedings{
cui2023open,
title={Open Visual Knowledge Extraction via Relation-Oriented Multimodality Model Prompting},
author={Hejie Cui and Xinyu Fang and Zihan Zhang and Ran Xu and Xuan Kan and Xin Liu and Yue Yu and Manling Li and Yangqiu Song and Carl Yang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ixVAXsdtJO}
} | Images contain rich relational knowledge that can help machines understand the world. Existing methods on visual knowledge extraction often rely on a pre-defined format (e.g., sub-verb-obj tuples) or vocabulary (e.g., relation types), restricting the expressiveness of the extracted knowledge. In this work, we present a first exploration of a new paradigm of open visual knowledge extraction. To achieve this, we present OpenVik, which consists of an open relational region detector to detect regions potentially containing relational knowledge and a visual knowledge generator that generates format-free knowledge by prompting a large multimodality model with the detected region of interest. We also explore two data enhancement techniques for diversifying the generated format-free visual knowledge. Extensive knowledge quality evaluations highlight the correctness and uniqueness of the open visual knowledge extracted by OpenVik. Moreover, integrating our extracted knowledge across various visual reasoning applications shows consistent improvements, indicating the real-world applicability of OpenVik. | Open Visual Knowledge Extraction via Relation-Oriented Multimodality Model Prompting | [
"Hejie Cui",
"Xinyu Fang",
"Zihan Zhang",
"Ran Xu",
"Xuan Kan",
"Xin Liu",
"Yue Yu",
"Manling Li",
"Yangqiu Song",
"Carl Yang"
] | Conference | poster | 2310.18804 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=iwp3H8uSeK | @inproceedings{
zhou2023distilling,
title={Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models},
author={Andy Zhou and Jindong Wang and Yu-Xiong Wang and Haohan Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=iwp3H8uSeK}
} | We propose a conceptually simple and lightweight framework for improving the robustness of vision models through the combination of knowledge distillation and data augmentation. We address the conjecture that larger models do not make for better teachers by showing strong gains in out-of-distribution robustness when distilling from pretrained foundation models. Following this finding, we propose Discrete Adversarial Distillation (DAD), which leverages a robust teacher to generate adversarial examples and a VQGAN to discretize them, creating more informative samples than standard data augmentation techniques. We provide a theoretical framework for the use of a robust teacher in the knowledge distillation with data augmentation setting and demonstrate strong gains in out-of-distribution robustness and clean accuracy across different student architectures. Notably, our method adds minor computational overhead compared to similar techniques and can be easily combined with other data augmentations for further improvements. | Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models | [
"Andy Zhou",
"Jindong Wang",
"Yu-Xiong Wang",
"Haohan Wang"
] | Conference | poster | 2311.01441 | [
"https://github.com/lapisrocks/DiscreteAdversarialDistillation"
] | https://huggingface.co/papers/2311.01441 | 0 | 1 | 0 | 4 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=iv2sTQtbst | @inproceedings{
wang2023patch,
title={Patch Diffusion: Faster and More Data-Efficient Training of Diffusion Models},
author={Zhendong Wang and Yifan Jiang and Huangjie Zheng and Peihao Wang and Pengcheng He and Zhangyang Wang and Weizhu Chen and Mingyuan Zhou},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=iv2sTQtbst}
} | Diffusion models are powerful, but they require a lot of time and data to train. We propose Patch Diffusion, a generic patch-wise training framework, to significantly reduce the training time costs while improving data efficiency, which thus helps democratize diffusion model training to broader users. At the core of our innovations is a new conditional score function at the patch level, where the patch location in the original image is included as additional coordinate channels, while the patch size is randomized and diversified throughout training to encode the cross-region dependency at multiple scales. Sampling with our method is as easy as in the original diffusion model. Through Patch Diffusion, we could achieve $\mathbf{\ge 2\times}$ faster training, while maintaining comparable or better generation quality. Patch Diffusion meanwhile improves the performance of diffusion models trained on relatively small datasets, $e.g.$, as few as 5,000 images to train from scratch. We achieve outstanding FID scores in line with state-of-the-art benchmarks: 1.77 on CelebA-64$\times$64, 1.93 on AFHQv2-Wild-64$\times$64, and 2.72 on ImageNet-256$\times$256. We share our code and pre-trained models at https://github.com/Zhendong-Wang/Patch-Diffusion. | Patch Diffusion: Faster and More Data-Efficient Training of Diffusion Models | [
"Zhendong Wang",
"Yifan Jiang",
"Huangjie Zheng",
"Peihao Wang",
"Pengcheng He",
"Zhangyang Wang",
"Weizhu Chen",
"Mingyuan Zhou"
] | Conference | poster | 2304.12526 | [
"https://github.com/Zhendong-Wang/Patch-Diffusion"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=iuqCXg1Gng | @inproceedings{
pesme2023saddletosaddle,
title={Saddle-to-Saddle Dynamics in Diagonal Linear Networks},
author={Scott Pesme and Nicolas Flammarion},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=iuqCXg1Gng}
} | In this paper we fully describe the trajectory of gradient flow over $2$-layer diagonal linear networks for the regression setting in the limit of vanishing initialisation. We show that the limiting flow successively jumps from one saddle of the training loss to another until reaching the minimum $\ell_1$-norm solution. We explicitly characterise the visited saddles as well as the jump times through a recursive algorithm reminiscent of the LARS algorithm used for computing the Lasso path. Starting from the zero vector, coordinates are successively activated until the minimum $\ell_1$-norm solution is recovered, revealing an incremental learning behaviour. Our proof leverages a convenient arc-length time-reparametrisation which enables us to keep track of the transitions between the jumps. Our analysis requires negligible assumptions on the data, applies to both under- and overparametrised settings and covers complex cases where there is no monotonicity of the number of active coordinates. We provide numerical experiments to support our findings. | Saddle-to-Saddle Dynamics in Diagonal Linear Networks | [
"Scott Pesme",
"Nicolas Flammarion"
] | Conference | spotlight | 2304.00488 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=irRHgjePdR | @inproceedings{
ren2023improving,
title={Improving Compositional Generalization using Iterated Learning and Simplicial Embeddings},
author={Yi Ren and Samuel Lavoie and Mikhail Galkin and Danica J. Sutherland and Aaron Courville},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=irRHgjePdR}
} | Compositional generalization, the ability of an agent to generalize to unseen combinations of latent factors, is easy for humans but hard for deep neural networks. A line of research in cognitive science has hypothesized a process, "iterated learning," to help explain how human language developed this ability; the theory rests on simultaneous pressures towards compressibility (when an ignorant agent learns from an informed one) and expressivity (when it uses the representation for downstream tasks). Inspired by this process, we propose to improve the compositional generalization of deep networks by using iterated learning on models with simplicial embeddings, which can approximately discretize representations. This approach is further motivated by an analysis of compositionality based on Kolmogorov complexity. We show that this combination of changes improves compositional generalization over other approaches, demonstrating these improvements both on vision tasks with well-understood latent factors and on real molecular graph prediction tasks where the latent structure is unknown. | Improving Compositional Generalization using Iterated Learning and Simplicial Embeddings | [
"Yi Ren",
"Samuel Lavoie",
"Mikhail Galkin",
"Danica J. Sutherland",
"Aaron Courville"
] | Conference | poster | 2310.18777 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=ir6WWkFR80 | @inproceedings{
wang2023punctuationlevel,
title={Punctuation-level Attack: Single-shot and Single Punctuation Can Fool Text Models},
author={Wenqiang Wang and Chongyang Du and Tao Wang and Kaihao Zhang and Wenhan Luo and Lin Ma and Wei Liu and Xiaochun Cao},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ir6WWkFR80}
} | Adversarial attacks have attracted increasing attention in various fields, including natural language processing. Current textual attack models primarily focus on fooling models by adding character-/word-/sentence-level perturbations, ignoring their influence on human perception. In this paper, for the first time in the community, we propose a novel mode of textual attack, the punctuation-level attack. With various types of perturbations, including insertion, displacement, deletion, and replacement, the punctuation-level attack achieves promising fooling rates against SOTA models on typical textual tasks while maintaining minimal influence on human perception and understanding of the text, through a single-shot perturbation of a single punctuation mark. Furthermore, we propose a search method named Text Position Punctuation Embedding and Paraphrase (TPPEP) to accelerate the search for the optimal position at which to deploy the attack, without exhaustive search, and we present a mathematical interpretation of TPPEP. Thanks to the integrated Text Position Punctuation Embedding (TPPE), the punctuation attack can be applied at constant time cost. Experimental results on public datasets and SOTA models demonstrate the effectiveness of the punctuation attack and the proposed TPPE. We additionally apply the single punctuation attack to summarization, semantic-similarity-scoring, and text-to-image tasks, and achieve encouraging results. | Punctuation-level Attack: Single-shot and Single Punctuation Can Fool Text Models | [
"Wenqiang Wang",
"Chongyang Du",
"Tao Wang",
"Kaihao Zhang",
"Wenhan Luo",
"Lin Ma",
"Wei Liu",
"Xiaochun Cao"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=iqezE0EyXq | @inproceedings{
veerabadran2023adaptive,
title={Adaptive recurrent vision performs zero-shot computation scaling to unseen difficulty levels},
author={Vijay Veerabadran and Srinivas Ravishankar and Yuan Tang and Ritik Raina and Virginia R. de Sa},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=iqezE0EyXq}
} | Humans solving algorithmic or reasoning problems typically exhibit solution times that grow as a function of problem difficulty.
Adaptive recurrent neural networks have been shown to exhibit this property for various language-processing tasks. However, little work has been performed to assess whether such adaptive computation can also enable vision models to extrapolate solutions beyond their training distribution's difficulty level, with prior work focusing on very simple tasks. In this study, we investigate a critical functional role of such adaptive processing using recurrent neural networks: to dynamically scale computational resources conditional on input requirements, allowing for zero-shot generalization to novel difficulty levels not seen during training, on two challenging visual reasoning tasks: PathFinder and Mazes. We combine convolutional recurrent neural networks (ConvRNNs) with a learnable halting mechanism based on Graves (2016). We explore various implementations of such adaptive ConvRNNs (AdRNNs) ranging from tying weights across layers to more sophisticated biologically inspired recurrent networks that possess lateral connections and gating. We show that 1) AdRNNs learn to dynamically halt processing early (or late) to solve easier (or harder) problems, 2) these RNNs zero-shot generalize to more difficult problem settings not shown during training by dynamically increasing the number of recurrent iterations at test time. Our study provides modeling evidence supporting the hypothesis that recurrent processing enables the functional advantage of adaptively allocating compute resources conditional on input requirements and hence allowing generalization to harder difficulty levels of a visual reasoning problem without training. | Adaptive recurrent vision performs zero-shot computation scaling to unseen difficulty levels | [
"Vijay Veerabadran",
"Srinivas Ravishankar",
"Yuan Tang",
"Ritik Raina",
"Virginia R. de Sa"
] | Conference | poster | 2311.06964 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=iohoef1bfM | @inproceedings{
wang2023generalized,
title={Generalized Belief Transport},
author={Junqi Wang and PEI WANG and Patrick Shafto},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=iohoef1bfM}
} | Human learners have the ability to adopt appropriate learning approaches depending on constraints such as the prior on the hypothesis, the urgency of the decision, and drift of the environment. However, existing learning models are typically considered individually rather than in relation to one another. To build agents that have the ability to move between different modes of learning over time, it is important to understand how learning models are related as points in a broader space of possibilities. We introduce a mathematical framework, Generalized Belief Transport (GBT), that unifies and generalizes prior models, including Bayesian inference, cooperative communication and classification, as parameterizations of three learning constraints within Unbalanced Optimal Transport (UOT). We visualize the space of learning models encoded by GBT as a cube which includes classic learning models as special points. We derive critical properties of this parameterized space, including continuity and differentiability, which form the basis for model interpolation, and study the limiting behavior of the parameters, which allows attaching learning models on the boundaries. Moreover, we investigate the long-run behavior of GBT, explore convergence properties of models in GBT mathematically and computationally, document the ability to learn in the presence of distribution drift, and formulate conjectures about general behavior. We conclude with open questions and implications for more unified models of learning. | Generalized Belief Transport | [
"Junqi Wang",
"PEI WANG",
"Patrick Shafto"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=inIONNg8Sq | @inproceedings{
solinas2023history,
title={History Filtering in Imperfect Information Games: Algorithms and Complexity},
author={Christopher Solinas and Doug Rebstock and Nathan R. Sturtevant and Michael Buro},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=inIONNg8Sq}
} | Historically applied exclusively to perfect information games, depth-limited search with value functions has been key to recent advances in AI for imperfect information games. Most prominent approaches with strong theoretical guarantees require *subgame decomposition* - a process in which a subgame is computed from public information and player beliefs. However, subgame decomposition can itself require non-trivial computations, and its tractability depends on the existence of efficient algorithms for either full enumeration or generation of the histories that form the root of the subgame. Despite this, no formal analysis of the tractability of such computations has been established in prior work, and application domains have often consisted of games, such as poker, for which enumeration is trivial on modern hardware.
Applying these ideas to more complex domains requires understanding their cost. In this work, we introduce and analyze the computational aspects and tractability of filtering histories for subgame decomposition. We show that constructing a single history from the root of the subgame is generally intractable, and then provide a necessary and sufficient condition for efficient enumeration. We also introduce a novel Markov Chain Monte Carlo-based generation algorithm for trick-taking card games - a domain where enumeration is often prohibitively expensive. Our experiments demonstrate its improved scalability in the trick-taking card game *Oh Hell*.
These contributions clarify when and how depth-limited search via subgame decomposition can be an effective tool for sequential decision-making in imperfect information settings. | History Filtering in Imperfect Information Games: Algorithms and Complexity | [
"Christopher Solinas",
"Doug Rebstock",
"Nathan R. Sturtevant",
"Michael Buro"
] | Conference | poster | 2311.14651 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=ikkdTD3hQJ | @inproceedings{
qi2023aims,
title={{AIMS}: All-Inclusive Multi-Level Segmentation for Anything},
author={Lu Qi and Jason Kuen and Weidong Guo and Jiuxiang Gu and Zhe Lin and Bo Du and Yu Xu and Ming-Hsuan Yang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ikkdTD3hQJ}
} | Despite progress in image segmentation toward accurate visual entity segmentation, meeting the diverse requirements of image editing applications for different-level region-of-interest selection remains unsolved. In this paper, we propose a new task, All-Inclusive Multi-Level Segmentation (AIMS), which segments visual regions into three levels: part, entity, and relation (two entities with some semantic relationship). We also build a unified AIMS model through multi-dataset multi-task training to address the two major challenges of annotation inconsistency and task correlation. Specifically, we propose task complementarity, association, and a prompt mask encoder for three-level predictions. Extensive experiments demonstrate the effectiveness and generalization capacity of our method compared to other state-of-the-art methods on a single dataset or the concurrent work on segment anything. We will make our code and trained model publicly available. | AIMS: All-Inclusive Multi-Level Segmentation for Anything | [
"Lu Qi",
"Jason Kuen",
"Weidong Guo",
"Jiuxiang Gu",
"Zhe Lin",
"Bo Du",
"Yu Xu",
"Ming-Hsuan Yang"
] | Conference | spotlight | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=ij3svnPLzG | @inproceedings{
dai2023semisupervised,
title={Semi-Supervised Contrastive Learning for Deep Regression with Ordinal Rankings from Spectral Seriation},
author={Weihang Dai and Yao DU and Hanru Bai and Kwang-Ting Cheng and Xiaomeng Li},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ij3svnPLzG}
} | Contrastive learning methods can be applied to deep regression by enforcing label distance relationships in feature space. However, unlike in classification, where unlabeled data can be used for contrastive pretraining, these methods are limited to labeled data only. In this work, we extend contrastive regression methods to allow unlabeled data to be used in a semi-supervised setting, thereby reducing the reliance on manual annotations. We observe that the feature similarity matrix between unlabeled samples still reflects inter-sample relationships, and that an accurate ordinal relationship can be recovered through spectral seriation algorithms if the level of error is within certain bounds. By using the recovered ordinal relationship for contrastive learning on unlabeled samples, we can allow more data to be used for feature representation learning, thereby achieving more robust results. The ordinal rankings can also be used to supervise predictions on unlabeled samples, which can serve as an additional training signal. We provide theoretical guarantees and empirical support through experiments on different datasets, demonstrating that our method can surpass existing state-of-the-art semi-supervised deep regression methods. To the best of our knowledge, this work is the first to explore using unlabeled data to perform contrastive learning for regression. | Semi-Supervised Contrastive Learning for Deep Regression with Ordinal Rankings from Spectral Seriation | [
"Weihang Dai",
"Yao DU",
"Hanru Bai",
"Kwang-Ting Cheng",
"Xiaomeng Li"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=iif9mGCTfy | @inproceedings{
yi2023frequencydomain,
title={Frequency-domain {MLP}s are More Effective Learners in Time Series Forecasting},
author={Kun Yi and Qi Zhang and Wei Fan and Shoujin Wang and Pengyang Wang and Hui He and Ning An and Defu Lian and Longbing Cao and Zhendong Niu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=iif9mGCTfy}
} | Time series forecasting plays a key role in different industries, including finance, traffic, energy, and healthcare. While the existing literature has designed many sophisticated architectures based on RNNs, GNNs, or Transformers, another class of approaches based on multi-layer perceptrons (MLPs) has been proposed with simple structure, low complexity, and superior performance. However, most MLP-based forecasting methods suffer from point-wise mappings and an information bottleneck, which largely hinders forecasting performance. To overcome this problem, we explore a novel direction of applying MLPs in the frequency domain for time series forecasting. We investigate the learned patterns of frequency-domain MLPs and discover their two inherent characteristics that benefit forecasting: (i) global view: the frequency spectrum gives MLPs a complete view of the signals, making global dependencies easier to learn; and (ii) energy compaction: frequency-domain MLPs concentrate on a smaller key part of the frequency components with compact signal energy. Then, we propose FreTS, a simple yet effective architecture built upon Frequency-domain MLPs for Time Series forecasting. FreTS mainly involves two stages: (i) Domain Conversion, which transforms time-domain signals into complex numbers in the frequency domain; and (ii) Frequency Learning, which performs our redesigned MLPs to learn the real and imaginary parts of the frequency components. The above stages, operated on both inter-series and intra-series scales, further contribute to channel-wise and time-wise dependency learning. Extensive experiments on 13 real-world benchmarks (including 7 benchmarks for short-term forecasting and 6 benchmarks for long-term forecasting) demonstrate our consistent superiority over state-of-the-art methods. Code is available at this repository: https://github.com/aikunyi/FreTS. | Frequency-domain MLPs are More Effective Learners in Time Series Forecasting | [
"Kun Yi",
"Qi Zhang",
"Wei Fan",
"Shoujin Wang",
"Pengyang Wang",
"Hui He",
"Ning An",
"Defu Lian",
"Longbing Cao",
"Zhendong Niu"
] | Conference | poster | 2311.06184 | [
"https://github.com/WenjieDu/PyPOTS"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=ihlT8yvQ2I | @inproceedings{
zheng2023gnnevaluator,
title={{GNNE}valuator: Evaluating {GNN} Performance On Unseen Graphs Without Labels},
author={Xin Zheng and Miao Zhang and Chunyang Chen and Soheila Molaei and Chuan Zhou and Shirui Pan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ihlT8yvQ2I}
} | Evaluating the performance of graph neural networks (GNNs) is an essential task for practical GNN model deployment and serving, as deployed GNNs face significant performance uncertainty when inferring on unseen and unlabeled test graphs, due to mismatched training-test graph distributions. In this paper, we study a *new* problem, **GNN model evaluation**, that aims to assess the performance of a specific GNN model trained on labeled and observed graphs, by precisely estimating its performance (e.g., node classification accuracy) on unseen graphs without labels. Concretely, we propose a two-stage GNN model evaluation framework, including (1) DiscGraph set construction and (2) GNNEvaluator training and inference. The DiscGraph set captures wide-range and diverse graph data distribution discrepancies through a discrepancy measurement function, which exploits the GNN outputs of latent node embeddings and node class predictions. Under the effective training supervision from the DiscGraph set, GNNEvaluator learns to precisely estimate node classification accuracy of the to-be-evaluated GNN model and makes an accurate inference for evaluating GNN model performance. Extensive experiments on real-world unseen and unlabeled test graphs demonstrate the effectiveness of our proposed method for GNN model evaluation. | GNNEvaluator: Evaluating GNN Performance On Unseen Graphs Without Labels | [
"Xin Zheng",
"Miao Zhang",
"Chunyang Chen",
"Soheila Molaei",
"Chuan Zhou",
"Shirui Pan"
] | Conference | poster | 2310.14586 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=igE3Zbxvws | @inproceedings{
brusca2023maximum,
title={Maximum Independent Set: Self-Training through Dynamic Programming},
author={Lorenzo Brusca and Lars C.P.M. Quaedvlieg and Stratis Skoulakis and Grigorios Chrysos and Volkan Cevher},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=igE3Zbxvws}
} | This work presents a graph neural network (GNN) framework for solving the maximum independent set (MIS) problem, inspired by dynamic programming (DP). Specifically, given a graph, we propose a DP-like recursive algorithm based on GNNs that firstly constructs two smaller sub-graphs, predicts the one with the larger MIS, and then uses it in the next recursive call. To train our algorithm, we require annotated comparisons of different graphs concerning their MIS size. Annotating the comparisons with the output of our algorithm leads to a self-training process that results in more accurate self-annotation of the comparisons and vice versa. We provide numerical evidence showing the superiority of our method vs prior methods in multiple synthetic and real-world datasets. | Maximum Independent Set: Self-Training through Dynamic Programming | [
"Lorenzo Brusca",
"Lars C.P.M. Quaedvlieg",
"Stratis Skoulakis",
"Grigorios Chrysos",
"Volkan Cevher"
] | Conference | poster | 2310.18672 | [
"https://github.com/LIONS-EPFL/dynamic-MIS"
] | https://huggingface.co/papers/2310.18672 | 1 | 1 | 0 | 5 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=ifbF4WdT8f | @inproceedings{
chen2023evoprompting,
title={EvoPrompting: Language Models for Code-Level Neural Architecture Search},
author={Angelica Chen and David Dohan and David So},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ifbF4WdT8f}
} | Given the recent impressive accomplishments of language models (LMs) for code generation, we explore the use of LMs as general adaptive mutation and crossover operators for an evolutionary neural architecture search (NAS) algorithm.
While NAS still proves too difficult a task for LMs to succeed at solely through prompting, we find that the combination of evolutionary prompt engineering with soft prompt-tuning, a method we term EvoPrompting, consistently finds diverse and high performing models. We first demonstrate that EvoPrompting is effective on the computationally efficient MNIST-1D dataset, where EvoPrompting produces convolutional architecture variants that outperform both those designed by human experts and naive few-shot prompting in terms of accuracy and model size. We then apply our method to searching for graph neural networks on the CLRS Algorithmic Reasoning Benchmark, where EvoPrompting is able to design *novel* architectures that outperform current state-of-the-art models on 21 out of 30 algorithmic reasoning tasks while maintaining similar model size. EvoPrompting is successful at designing accurate and efficient neural network architectures across a variety of machine learning tasks, while also being general enough for easy adaptation to other tasks beyond neural network design. | EvoPrompting: Language Models for Code-Level Neural Architecture Search | [
"Angelica Chen",
"David Dohan",
"David So"
] | Conference | poster | 2302.14838 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=icWwBKyVMs | @inproceedings{
seo2023interpretable,
title={Interpretable Prototype-based Graph Information Bottleneck},
author={Sangwoo Seo and Sungwon Kim and Chanyoung Park},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=icWwBKyVMs}
} | The success of Graph Neural Networks (GNNs) has led to a need for understanding their decision-making process and providing explanations for their predictions, which has given rise to explainable AI (XAI) that offers transparent explanations for black-box models. Recently, the use of prototypes has successfully improved the explainability of models by learning prototypes to imply training graphs that affect the prediction. However, these approaches tend to provide prototypes with excessive information from the entire graph, leading to the exclusion of key substructures or the inclusion of irrelevant substructures, which can limit both the interpretability and the performance of the model in downstream tasks. In this work, we propose a novel framework of explainable GNNs, called interpretable Prototype-based Graph Information Bottleneck (PGIB) that incorporates prototype learning within the information bottleneck framework to provide prototypes with the key subgraph from the input graph that is important for the model prediction. This is the first work that incorporates prototype learning into the process of identifying the key subgraphs that have a critical impact on the prediction performance. Extensive experiments, including qualitative analysis, demonstrate that PGIB outperforms state-of-the-art methods in terms of both prediction performance and explainability. | Interpretable Prototype-based Graph Information Bottleneck | [
"Sangwoo Seo",
"Sungwon Kim",
"Chanyoung Park"
] | Conference | poster | 2310.19906 | [
"https://github.com/sang-woo-seo/pgib"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=iajxrSgOSX | @inproceedings{
kwon2023deliffas,
title={{DELIFFAS}: Deformable Light Fields for Fast Avatar Synthesis},
author={YoungJoong Kwon and Lingjie Liu and Henry Fuchs and Marc Habermann and Christian Theobalt},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=iajxrSgOSX}
} | Generating controllable and photorealistic digital human avatars is a long-standing and important problem in Vision and Graphics. Recent methods have shown great progress in terms of either photorealism or inference speed while the combination of the two desired properties still remains unsolved. To this end, we propose a novel method, called DELIFFAS, which parameterizes the appearance of the human as a surface light field that is attached to a controllable and deforming human mesh model. At the core, we represent the light field around the human with a deformable two-surface parameterization, which enables fast and accurate inference of the human appearance. This allows perceptual supervision on the full image compared to previous approaches that could only supervise individual pixels or small patches due to their slow runtime. Our carefully designed human representation and supervision strategy leads to state-of-the-art synthesis results and inference time. The video results and code are available at https://vcai.mpi-inf.mpg.de/projects/DELIFFAS. | DELIFFAS: Deformable Light Fields for Fast Avatar Synthesis | [
"YoungJoong Kwon",
"Lingjie Liu",
"Henry Fuchs",
"Marc Habermann",
"Christian Theobalt"
] | Conference | poster | 2310.11449 | [
""
] | https://huggingface.co/papers/2310.11449 | 0 | 1 | 0 | 5 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=ia4AL3QnOv | @inproceedings{
oesterheld2023similaritybased,
title={Similarity-based cooperative equilibrium},
author={Caspar Oesterheld and Johannes Treutlein and Roger Baker Grosse and Vincent Conitzer and Jakob Nicolaus Foerster},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=ia4AL3QnOv}
} | As machine learning agents act more autonomously in the world, they will increasingly interact with each other. Unfortunately, in many social dilemmas like the one-shot Prisoner’s Dilemma, standard game theory predicts that ML agents will fail to cooperate with each other. Prior work has shown that one way to enable cooperative outcomes in the one-shot Prisoner’s Dilemma is to make the agents mutually transparent to each other, i.e., to allow them to access one another’s source code (Rubinstein, 1998; Tennenholtz, 2004) – or weights in the case of ML agents. However, full transparency is often unrealistic, whereas partial transparency is commonplace. Moreover, it is challenging for agents to learn their way to cooperation in the full transparency setting. In this paper, we introduce a more realistic setting in which agents only observe a single number indicating how similar they are to each other. We prove that this allows for the same set of cooperative outcomes as the full transparency setting. We also demonstrate experimentally that cooperation can be learned using simple ML methods. | Similarity-based cooperative equilibrium | [
"Caspar Oesterheld",
"Johannes Treutlein",
"Roger Baker Grosse",
"Vincent Conitzer",
"Jakob Nicolaus Foerster"
] | Conference | poster | 2211.14468 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=iWWLgcUTZU | @inproceedings{
lou2023pcfgan,
title={{PCF}-{GAN}: generating sequential data via the characteristic function of measures on the path space},
author={Hang Lou and Siran Li and Hao Ni},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=iWWLgcUTZU}
} | Generating high-fidelity time series data using generative adversarial networks (GANs) remains a challenging task, as it is difficult to capture the temporal dependence of joint probability distributions induced by time-series data. Towards this goal, a key step is the development of an effective discriminator to distinguish between time series distributions. We propose the so-called PCF-GAN, a novel GAN that incorporates the path characteristic function (PCF) as the principled representation of time series distribution into the discriminator to enhance its generative performance. On the one hand, we establish theoretical foundations of the PCF distance by proving its characteristicity, boundedness, differentiability with respect to generator parameters, and weak continuity, which ensure the stability and feasibility of training the PCF-GAN. On the other hand, we design efficient initialisation and optimisation schemes for PCFs to strengthen the discriminative power and accelerate training efficiency. To further boost the capabilities of complex time series generation, we integrate the auto-encoder structure via sequential embedding into the PCF-GAN, which provides additional reconstruction functionality. Extensive numerical experiments on various datasets demonstrate the consistently superior performance of PCF-GAN over state-of-the-art baselines, in both generation and reconstruction quality. | PCF-GAN: generating sequential data via the characteristic function of measures on the path space | [
"Hang Lou",
"Siran Li",
"Hao Ni"
] | Conference | poster | 2305.12511 | [
"https://github.com/deepintostreams/pcf-gan"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=iWGC0Nsq9i | @inproceedings{
chehab2023provable,
title={Provable benefits of annealing for estimating normalizing constants: Importance Sampling, Noise-Contrastive Estimation, and beyond},
author={Omar Chehab and Aapo Hyvarinen and Andrej Risteski},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=iWGC0Nsq9i}
} | Recent research has developed several Monte Carlo methods for estimating the normalization constant (partition function) based on the idea of annealing. This means sampling successively from a path of distributions which interpolate between a tractable "proposal" distribution and the unnormalized "target" distribution. Prominent estimators in this family include annealed importance sampling and annealed noise-contrastive estimation (NCE). Such methods hinge on a number of design choices: which estimator to use, which path of distributions to use and whether to use a path at all; so far, there is no definitive theory on which choices are efficient. Here, we evaluate each design choice by the asymptotic estimation error it produces. First, we show that using NCE is more efficient than the importance sampling estimator, but in the limit of infinitesimal path steps, the difference vanishes. Second, we find that using the geometric path brings down the estimation error from an exponential to a polynomial function of the parameter distance between the target and proposal distributions. Third, we find that the arithmetic path, while rarely used, can offer optimality properties over the universally-used geometric path. In fact, in a particular limit, the optimal path is arithmetic. Based on this theory, we finally propose a two-step estimator to approximate the optimal path in an efficient way. | Provable benefits of annealing for estimating normalizing constants: Importance Sampling, Noise-Contrastive Estimation, and beyond | [
"Omar Chehab",
"Aapo Hyvarinen",
"Andrej Risteski"
] | Conference | spotlight | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=iVYInarGXg | @inproceedings{
chen2023on,
title={On the Identifiability and Interpretability of Gaussian Process Models},
author={Jiawen Chen and Wancen Mu and Yun Li and Didong Li},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=iVYInarGXg}
} | In this paper, we critically examine the prevalent practice of using additive mixtures of Mat\'ern kernels in single-output Gaussian process (GP) models and explore the properties of multiplicative mixtures of Mat\'ern kernels for multi-output GP models. For the single-output case, we derive a series of theoretical results showing that the smoothness of a mixture of Mat\'ern kernels is determined by the least smooth component and that a GP with such a kernel is effectively equivalent to the least smooth kernel component. Furthermore, we demonstrate that none of the mixing weights or parameters within individual kernel components are identifiable. We then turn our attention to multi-output GP models and analyze the identifiability of the covariance matrix $A$ in the multiplicative kernel $K(x,y) = AK_0(x,y)$, where $K_0$ is a standard single output kernel such as Mat\'ern. We show that $A$ is identifiable up to a multiplicative constant, suggesting that multiplicative mixtures are well suited for multi-output tasks. Our findings are supported by extensive simulations and real applications for both single- and multi-output settings. This work provides insight into kernel selection and interpretation for GP models, emphasizing the importance of choosing appropriate kernel structures for different tasks. | On the Identifiability and Interpretability of Gaussian Process Models | [
"Jiawen Chen",
"Wancen Mu",
"Yun Li",
"Didong Li"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=iT9MOAZqsb | @inproceedings{
kumano2023adversarial,
title={Adversarial Training from Mean Field Perspective},
author={Soichiro Kumano and Hiroshi Kera and Toshihiko Yamasaki},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=iT9MOAZqsb}
} | Although adversarial training is known to be effective against adversarial examples, training dynamics are not well understood. In this study, we present the first theoretical analysis of adversarial training in random deep neural networks without any assumptions on data distributions. We introduce a new theoretical framework based on mean field theory, which addresses the limitations of existing mean field-based approaches. Based on the framework, we derive the (empirically tight) upper bounds of $\ell_q$ norm-based adversarial loss with $\ell_p$ norm-based adversarial examples for various values of $p$ and $q$. Moreover, we prove that networks without shortcuts are generally not adversarially trainable and that adversarial training reduces network capacity. We also show that the network width alleviates these issues. Furthermore, the various impacts of input and output dimensions on the upper bounds and time evolution of weight variance are presented. | Adversarial Training from Mean Field Perspective | [
"Soichiro Kumano",
"Hiroshi Kera",
"Toshihiko Yamasaki"
] | Conference | spotlight | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=iSd8g75QvP | @inproceedings{
hanneke2023a,
title={A Trichotomy for Transductive Online Learning},
author={Steve Hanneke and Shay Moran and Jonathan Shafer},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=iSd8g75QvP}
} | We present new upper and lower bounds on the number of learner mistakes in the `transductive' online learning setting of Ben-David, Kushilevitz and Mansour (1997).
This setting is similar to standard online learning, except that the adversary fixes a sequence of instances $x_1,\dots,x_n$ to be labeled at the start of the game, and this sequence is known to the learner.
Qualitatively, we prove a \emph{trichotomy}, stating that the minimal number of mistakes made by the learner as $n$ grows can take only one of precisely three possible values: $n$, $\Theta\left(\log (n)\right)$, or $\Theta(1)$.
Furthermore, this behavior is determined by a combination of the VC dimension and the Littlestone dimension.
Quantitatively, we show a variety of bounds relating the number of mistakes to well-known combinatorial dimensions.
In particular, we improve the known lower bound on the constant in the $\Theta(1)$ case from $\Omega\left(\sqrt{\log(d)}\right)$ to $\Omega(\log(d))$ where $d$ is the Littlestone dimension.
Finally, we extend our results to cover multiclass classification and the agnostic setting. | A Trichotomy for Transductive Online Learning | [
"Steve Hanneke",
"Shay Moran",
"Jonathan Shafer"
] | Conference | poster | 2311.06428 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=iQlK3VJxV7 | @inproceedings{
hao2023uncertaintyaware,
title={Uncertainty-Aware Alignment Network for Cross-Domain Video-Text Retrieval},
author={Xiaoshuai Hao and Wanqian Zhang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=iQlK3VJxV7}
} | Video-text retrieval is an important but challenging research task in the multimedia community. In this paper, we address the challenge task of Unsupervised Domain Adaptation Video-text Retrieval (UDAVR), assuming that training (source) data and testing (target) data are from different domains. Previous approaches are mostly derived from classification based domain adaptation methods, which are neither multi-modal nor suitable for retrieval task. In addition, as to the pairwise misalignment issue in target domain, i.e., no pairwise annotations between target videos and texts, the existing method assumes that a video corresponds to a text. Yet we empirically find that in the real scene, one text usually corresponds to multiple videos and vice versa. To tackle this one-to-many issue, we propose a novel method named Uncertainty-aware Alignment Network (UAN). Specifically, we first introduce the multimodal mutual information module to balance the minimization of domain shift in a smooth manner. To tackle the multimodal uncertainties pairwise misalignment in target domain, we propose the Uncertainty-aware Alignment Mechanism (UAM) to fully exploit the semantic information of both modalities in target domain. Extensive experiments in the context of domain-adaptive video-text retrieval demonstrate that our proposed method consistently outperforms multiple baselines, showing a superior generalization ability for target data. | Uncertainty-Aware Alignment Network for Cross-Domain Video-Text Retrieval | [
"Xiaoshuai Hao",
"Wanqian Zhang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=iPTF2hON1C | @inproceedings{
paulus2023learning,
title={Learning To Dive In Branch And Bound},
author={Max B. Paulus and Andreas Krause},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=iPTF2hON1C}
} | Primal heuristics are important for solving mixed integer linear programs, because they find feasible solutions that facilitate branch and bound search. A prominent group of primal heuristics are diving heuristics. They iteratively modify and resolve linear programs to conduct a depth-first search from any node in the search tree. Existing divers rely on generic decision rules that fail to exploit structural commonality between similar problem instances that often arise in practice. Therefore, we propose L2Dive to learn specific diving heuristics with graph neural networks: We train generative models to predict variable assignments and leverage the duality of linear programs to make diving decisions based on the model's predictions. L2Dive is fully integrated into the open-source solver SCIP. We find that L2Dive outperforms standard divers to find better feasible solutions on a range of combinatorial optimization problems. For real-world applications from server load balancing and neural network verification, L2Dive improves the primal-dual integral by up to 7% (35%) on average over a tuned (default) solver baseline and reduces average solving time by 20% (29%). | Learning To Dive In Branch And Bound | [
"Max B. Paulus",
"Andreas Krause"
] | Conference | poster | 2301.09943 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=iMfPFPMsZo | @inproceedings{
chakrabarty2023parallel,
title={Parallel Submodular Function Minimization},
author={Deeparnab Chakrabarty and Andrei Graur and Haotian Jiang and Aaron Sidford},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=iMfPFPMsZo}
} | We consider the parallel complexity of submodular function minimization (SFM).
We provide a pair of methods which obtain two new query versus depth trade-offs for a submodular function defined on subsets of $n$ elements that has integer values between $-M$ and $M$. The first method has depth $2$ and query complexity $n^{O(M)}$ and the second method has depth $\widetilde{O}(n^{1/3} M^{2/3})$ and query complexity $O(\mathrm{poly}(n, M))$. Despite a line of work on improved parallel lower bounds for SFM, prior to our work the only known algorithms for parallel SFM either followed from more general methods for sequential SFM or from highly-parallel minimization of convex $\ell_2$-Lipschitz functions. Interestingly, to obtain our second result we provide the first highly-parallel algorithm for minimizing an $\ell_\infty$-Lipschitz function over the hypercube which achieves near-optimal depth for obtaining constant accuracy. | Parallel Submodular Function Minimization | [
"Deeparnab Chakrabarty",
"Andrei Graur",
"Haotian Jiang",
"Aaron Sidford"
] | Conference | spotlight | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=iM0MWWBr4W | @inproceedings{
brukhim2023a,
title={A Unified Model and Dimension for Interactive Estimation},
author={Nataly Brukhim and Miroslav Dud{\'\i}k and Aldo Pacchiano and Robert E. Schapire},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=iM0MWWBr4W}
} | We study an abstract framework for interactive learning called interactive estimation in which the goal is to estimate a target from its ``similarity'' to points queried by the learner.
We introduce a combinatorial measure called Dissimilarity dimension which largely captures learnability in our model.
We present a simple, general, and broadly-applicable algorithm, for which we obtain both regret and PAC generalization bounds that are polynomial in the new dimension. We show that our framework subsumes and thereby unifies two classic learning models:
statistical-query learning and structured bandits. We also delineate how the Dissimilarity dimension is related to well-known parameters for both frameworks, in some cases yielding significantly improved analyses. | A Unified Model and Dimension for Interactive Estimation | [
"Nataly Brukhim",
"Miroslav Dudík",
"Aldo Pacchiano",
"Robert E. Schapire"
] | Conference | poster | 2306.06184 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=iKarSI2a73 | @inproceedings{
chen2023bicriteria,
title={Bicriteria Approximation Algorithms for the Submodular Cover Problem},
author={Wenjing Chen and Victoria G. Crawford},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=iKarSI2a73}
} | In this paper, we consider the optimization problem Submodular Cover (SCP), which is to find a minimum cardinality subset of a finite universe $U$ such that the value of a submodular function $f$ is above an input threshold $\tau$. In particular, we consider several variants of SCP including the general case, the case where $f$ is additionally assumed to be monotone, and finally the case where $f$ is a regularized monotone submodular function. Our most significant contributions are that: (i) We propose a scalable algorithm for monotone SCP that achieves nearly the same approximation guarantees as the standard greedy algorithm in significantly faster time; (ii) We are the first to develop an algorithm for general SCP that achieves a solution arbitrarily close to being feasible; and finally (iii) we are the first to develop algorithms for regularized SCP. Our algorithms are then demonstrated to be effective in an extensive experimental section on data summarization and graph cut, two applications of SCP. | Bicriteria Approximation Algorithms for the Submodular Cover Problem | [
"Wenjing Chen",
"Victoria G. Crawford"
] | Conference | poster | 2309.14558 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=iImnbUVhok | @inproceedings{
sordoni2023joint,
title={Joint Prompt Optimization of Stacked {LLM}s using Variational Inference},
author={Alessandro Sordoni and Xingdi Yuan and Marc-Alexandre C{\^o}t{\'e} and Matheus Pereira and Adam Trischler and Ziang Xiao and Arian Hosseini and Friederike Niedtner and Nicolas Le Roux},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=iImnbUVhok}
} | Large language models (LLMs) can be seen as atomic units of computation mapping sequences to a distribution over sequences. Thus, they can be seen as stochastic language layers in a language network, where the learnable parameters are the natural language prompts at each layer. By stacking two such layers and feeding the output of one layer to the next, we obtain a Deep Language Network (DLN). We first show how to effectively perform prompt optimization for a 1-Layer language network (DLN-1). Then, we present an extension that applies to 2-layer DLNs (DLN-2), where two prompts must be learned. The key idea is to consider the output of the first layer as a latent variable, which requires inference, and prompts to be learned as the parameters of the generative distribution. We first test the effectiveness of DLN-1 in multiple reasoning and natural language understanding tasks. Then, we show that DLN-2 can reach higher performance than a single layer, showing promise that we might reach comparable performance to GPT-4, even when each LLM in the network is smaller and less powerful. | Joint Prompt Optimization of Stacked LLMs using Variational Inference | [
"Alessandro Sordoni",
"Xingdi Yuan",
"Marc-Alexandre Côté",
"Matheus Pereira",
"Adam Trischler",
"Ziang Xiao",
"Arian Hosseini",
"Friederike Niedtner",
"Nicolas Le Roux"
] | Conference | poster | 2306.12509 | [
"https://github.com/microsoft/deep-language-networks"
] | https://huggingface.co/papers/2306.12509 | 6 | 14 | 0 | 9 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=iGmDQn4CRj | @inproceedings{
shwartz-ziv2023simplifying,
title={Simplifying Neural Network Training Under Class Imbalance},
author={Ravid Shwartz-Ziv and Micah Goldblum and Yucen Lily Li and C. Bayan Bruss and Andrew Gordon Wilson},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=iGmDQn4CRj}
} | Real-world datasets are often highly class-imbalanced, which can adversely impact the performance of deep learning models. The majority of research on training neural networks under class imbalance has focused on specialized loss functions and sampling techniques. Notably, we demonstrate that simply tuning existing components of standard deep learning pipelines, such as the batch size, data augmentation, architecture size, pre-training, optimizer, and label smoothing, can achieve state-of-the-art performance without any specialized loss functions or samplers. We also provide key prescriptions and considerations for training under class imbalance, and an understanding of why imbalance methods succeed or fail. | Simplifying Neural Network Training Under Class Imbalance | [
"Ravid Shwartz-Ziv",
"Micah Goldblum",
"Yucen Lily Li",
"C. Bayan Bruss",
"Andrew Gordon Wilson"
] | Conference | poster | 2312.02517 | [
"https://github.com/ravidziv/simplifyingimbalancedtraining"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=iFxWrxDekd | @inproceedings{
chen2023stochastic,
title={Stochastic Collapse: How Gradient Noise Attracts {SGD} Dynamics Towards Simpler Subnetworks},
author={Feng Chen and Daniel Kunin and Atsushi Yamamura and Surya Ganguli},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=iFxWrxDekd}
} | In this work, we reveal a strong implicit bias of stochastic gradient descent (SGD) that drives overly expressive networks to much simpler subnetworks, thereby dramatically reducing the number of independent parameters, and improving generalization. To reveal this bias, we identify _invariant sets_, or subsets of parameter space that remain unmodified by SGD. We focus on two classes of invariant sets that correspond to simpler (sparse or low-rank) subnetworks and commonly appear in modern architectures. Our analysis uncovers that SGD exhibits a property of _stochastic attractivity_ towards these simpler invariant sets. We establish a sufficient condition for stochastic attractivity based on a competition between the loss landscape's curvature around the invariant set and the noise introduced by stochastic gradients. Remarkably, we find that an increased level of noise strengthens attractivity, leading to the emergence of attractive invariant sets associated with saddle-points or local maxima of the train loss. We observe empirically the existence of attractive invariant sets in trained deep neural networks, implying that SGD dynamics often collapses to simple subnetworks with either vanishing or redundant neurons. We further demonstrate how this simplifying process of _stochastic collapse_ benefits generalization in a linear teacher-student framework. Finally, through this analysis, we mechanistically explain why early training with large learning rates for extended periods benefits subsequent generalization. | Stochastic Collapse: How Gradient Noise Attracts SGD Dynamics Towards Simpler Subnetworks | [
"Feng Chen",
"Daniel Kunin",
"Atsushi Yamamura",
"Surya Ganguli"
] | Conference | poster | 2306.04251 | [
"https://github.com/ccffccffcc/stochastic_collapse"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=iB3Ew6z4WL | @inproceedings{
swamy2023multimodnmultimodal,
title={MultiMo{DN}{\textemdash}Multimodal, Multi-Task, Interpretable Modular Networks},
author={Vinitra Swamy and Malika Satayeva and Jibril Frej and Thierry Bossy and Thijs Vogels and Martin Jaggi and Tanja K{\"a}ser and Mary-Anne Hartley},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=iB3Ew6z4WL}
} | Predicting multiple real-world tasks in a single model often requires a particularly diverse feature space. Multimodal (MM) models aim to extract the synergistic predictive potential of multiple data types to create a shared feature space with aligned semantic meaning across inputs of drastically varying sizes (i.e. images, text, sound). Most current MM architectures fuse these representations in parallel, which not only limits their interpretability but also creates a dependency on modality availability. We present MultiModN, a multimodal, modular network that fuses latent representations in a sequence of any number, combination, or type of modality while providing granular real-time predictive feedback on any number or combination of predictive tasks. MultiModN's composable pipeline is interpretable-by-design, as well as innately multi-task and robust to the fundamental issue of biased missingness. We perform four experiments on several benchmark MM datasets across 10 real-world tasks (predicting medical diagnoses, academic performance, and weather), and show that MultiModN's sequential MM fusion does not compromise performance compared with a baseline of parallel fusion. By simulating the challenging bias of missing not-at-random (MNAR), this work shows that, contrary to MultiModN, parallel fusion baselines erroneously learn MNAR and suffer catastrophic failure when faced with different patterns of MNAR at inference. To the best of our knowledge, this is the first inherently MNAR-resistant approach to MM modeling. In conclusion, MultiModN provides granular insights, robustness, and flexibility without compromising performance. | MultiMoDN—Multimodal, Multi-Task, Interpretable Modular Networks | [
"Vinitra Swamy",
"Malika Satayeva",
"Jibril Frej",
"Thierry Bossy",
"Thijs Vogels",
"Martin Jaggi",
"Tanja Käser",
"Mary-Anne Hartley"
] | Conference | poster | [
"https://github.com/epfl-iglobalhealth/multimodn"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=iAcEmyhwk2 | @inproceedings{
frauen2023sharp,
title={Sharp Bounds for Generalized Causal Sensitivity Analysis},
author={Dennis Frauen and Valentyn Melnychuk and Stefan Feuerriegel},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=iAcEmyhwk2}
} | Causal inference from observational data is crucial for many disciplines such as medicine and economics. However, sharp bounds for causal effects under relaxations of the unconfoundedness assumption (causal sensitivity analysis) are subject to ongoing research. So far, works with sharp bounds are restricted to fairly simple settings (e.g., a single binary treatment). In this paper, we propose a unified framework for causal sensitivity analysis under unobserved confounding in various settings. For this, we propose a flexible generalization of the marginal sensitivity model (MSM) and then derive sharp bounds for a large class of causal effects. This includes (conditional) average treatment effects, effects for mediation analysis and path analysis, and distributional effects. Furthermore, our sensitivity model is applicable to discrete, continuous, and time-varying treatments. It allows us to interpret the partial identification problem under unobserved confounding as a distribution shift in the latent confounders while evaluating the causal effect of interest. In the special case of a single binary treatment, our bounds for (conditional) average treatment effects coincide with recent optimality results for causal sensitivity analysis. Finally, we propose a scalable algorithm to estimate our sharp bounds from observational data. | Sharp Bounds for Generalized Causal Sensitivity Analysis | [
"Dennis Frauen",
"Valentyn Melnychuk",
"Stefan Feuerriegel"
] | Conference | poster | 2305.16988 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=iATY9W5Xw7 | @inproceedings{
lee2023cast,
title={{CAST}: Cross-Attention in Space and Time for Video Action Recognition},
author={Dongho Lee and Jongseo Lee and Jinwoo Choi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=iATY9W5Xw7}
} | Recognizing human actions in videos requires spatial and temporal understanding. Most existing action recognition models lack a balanced spatio-temporal understanding of videos. In this work, we propose a novel two-stream architecture, called Cross-Attention in Space and Time (CAST), that achieves a balanced spatio-temporal understanding of videos using only RGB input. Our proposed bottleneck cross-attention mechanism enables the spatial and temporal expert models to exchange information and make synergistic predictions, leading to improved performance. We validate the proposed method with extensive experiments on public benchmarks with different characteristics: EPIC-Kitchens-100, Something-Something-V2, and Kinetics-400. Our method consistently shows favorable performance across these datasets, while the performance of existing methods fluctuates depending on the dataset characteristics. The code is available at https://github.com/KHU-VLL/CAST. | CAST: Cross-Attention in Space and Time for Video Action Recognition | [
"Dongho Lee",
"Jongseo Lee",
"Jinwoo Choi"
] | Conference | poster | 2311.18825 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=iAAXq60Bw1 | @inproceedings{
oh2023geodesic,
title={Geodesic Multi-Modal Mixup for Robust Fine-Tuning},
author={Changdae Oh and Junhyuk So and Hoyoon Byun and YongTaek Lim and Minchul Shin and Jong-June Jeon and Kyungwoo Song},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=iAAXq60Bw1}
} | Pre-trained multi-modal models, such as CLIP, provide transferable embeddings and show promising results in diverse applications. However, the analysis of learned multi-modal embeddings is relatively unexplored, and the embedding transferability can be improved. In this work, we observe that CLIP holds separated embedding subspaces for two different modalities, and then we investigate it through the lens of \textit{uniformity-alignment} to measure the quality of learned representation. Both theoretically and empirically, we show that CLIP retains poor uniformity and alignment even after fine-tuning. Such a lack of alignment and uniformity might restrict the transferability and robustness of embeddings. To this end, we devise a new fine-tuning method for robust representation equipping better alignment and uniformity. First, we propose a \textit{Geodesic Multi-Modal Mixup} that mixes the embeddings of image and text to generate hard negative samples on the hypersphere. Then, we fine-tune the model on hard negatives as well as original negatives and positives with contrastive loss. Based on the theoretical analysis about hardness guarantee and limiting behavior, we justify the use of our method. Extensive experiments on retrieval, calibration, few- or zero-shot classification (under distribution shift), embedding arithmetic, and image captioning further show that our method provides transferable representations, enabling robust model adaptation on diverse tasks. | Geodesic Multi-Modal Mixup for Robust Fine-Tuning | [
"Changdae Oh",
"Junhyuk So",
"Hoyoon Byun",
"YongTaek Lim",
"Minchul Shin",
"Jong-June Jeon",
"Kyungwoo Song"
] | Conference | poster | 2203.03897 | [
"https://github.com/changdaeoh/multimodal-mixup"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=i913TUOvTK | @inproceedings{
chen2023cinematic,
title={Cinematic Mindscapes: High-quality Video Reconstruction from Brain Activity},
author={Zijiao Chen and Jiaxin Qing and Juan Helen Zhou},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=i913TUOvTK}
} | Reconstructing human vision from brain activities has been an appealing task that helps to understand our cognitive process. Even though recent research has seen great success in reconstructing static images from non-invasive brain recordings, work on recovering continuous visual experiences in the form of videos is limited. In this work, we propose Mind-Video that learns spatiotemporal information from continuous fMRI data of the cerebral cortex progressively through masked brain modeling, multimodal contrastive learning with spatiotemporal attention, and co-training with an augmented Stable Diffusion model that incorporates network temporal inflation.
We show that high-quality videos of arbitrary frame rates can be reconstructed with Mind-Video using adversarial guidance. The recovered videos were evaluated with various semantic and pixel-level metrics. We achieved an average accuracy of 85% in semantic classification tasks and 0.19 in structural similarity index (SSIM), outperforming the previous state-of-the-art by 45%. We also show that our model is biologically plausible and interpretable, reflecting established physiological processes. | Cinematic Mindscapes: High-quality Video Reconstruction from Brain Activity | [
"Zijiao Chen",
"Jiaxin Qing",
"Juan Helen Zhou"
] | Conference | oral | 2305.11675 | [
""
] | https://huggingface.co/papers/2305.11675 | 2 | 1 | 1 | 3 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=i6mMWNcTfu | @inproceedings{
you2023shiftaddvit,
title={ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer},
author={Haoran You and Huihong Shi and Yipin Guo and Yingyan Celine Lin},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=i6mMWNcTfu}
} | Vision Transformers (ViTs) have shown impressive performance and have become a unified backbone for multiple vision tasks. However, both the attention mechanism and multi-layer perceptrons (MLPs) in ViTs are not sufficiently efficient due to dense multiplications, leading to costly training and inference. To this end, we propose to reparameterize pre-trained ViTs with a mixture of multiplication primitives, e.g., bitwise shifts and additions, towards a new type of multiplication-reduced model, dubbed $\textbf{ShiftAddViT}$, which aims to achieve end-to-end inference speedups on GPUs without requiring training from scratch. Specifically, all $\texttt{MatMuls}$ among queries, keys, and values are reparameterized using additive kernels, after mapping queries and keys to binary codes in Hamming space. The remaining MLPs or linear layers are then reparameterized with shift kernels. We utilize TVM to implement and optimize those customized kernels for practical hardware deployment on GPUs. We find that such a reparameterization on (quadratic or linear) attention maintains model accuracy, while inevitably leading to accuracy drops when being applied to MLPs. To marry the best of both worlds, we further propose a new mixture of experts (MoE) framework to reparameterize MLPs by taking multiplication or its primitives as experts, e.g., multiplication and shift, and designing a new latency-aware load-balancing loss. Such a loss helps to train a generic router for assigning a dynamic amount of input tokens to different experts according to their latency. In principle, the faster the experts run, the more input tokens they are assigned. Extensive experiments on various 2D/3D Transformer-based vision tasks consistently validate the effectiveness of our proposed ShiftAddViT, achieving up to $\textbf{5.18$\times$}$ latency reductions on GPUs and $\textbf{42.9}$% energy savings, while maintaining a comparable accuracy as original or efficient ViTs. 
Codes and models are available at https://github.com/GATECH-EIC/ShiftAddViT. | ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer | [
"Haoran You",
"Huihong Shi",
"Yipin Guo",
"Yingyan Celine Lin"
] | Conference | poster | 2306.06446 | [
"https://github.com/gatech-eic/shiftaddvit"
] | https://huggingface.co/papers/2306.06446 | 1 | 1 | 0 | 5 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=i5sSWKbF3b | @inproceedings{
maros2023decentralized,
title={Decentralized Matrix Sensing: Statistical Guarantees and Fast Convergence},
author={Marie Maros and Gesualdo Scutari},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=i5sSWKbF3b}
} | We explore the matrix sensing problem from near-isotropic linear measurements, distributed across a network of agents modeled as an undirected graph, with no centralized node. We provide the first study of statistical, computational/communication guarantees for a decentralized gradient algorithm that solves the (nonconvex) Burer-Monteiro type decomposition associated to the low-rank matrix estimation. With small random initialization, the algorithm displays an approximate two-phase convergence: (i) a spectral phase that aligns the iterates' column space with the underlying low-rank matrix, mimicking centralized spectral initialization (not directly implementable over networks); and (ii) a local refinement phase that diverts the iterates from certain degenerate saddle points, while ensuring swift convergence to the underlying low-rank matrix. Central to our analysis is a novel "in-network" Restricted Isometry Property which accommodates the decentralized nature of the optimization, revealing an intriguing interplay between sample complexity and network connectivity, topology, and communication complexity. | Decentralized Matrix Sensing: Statistical Guarantees and Fast Convergence | [
"Marie Maros",
"Gesualdo Scutari"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=i39yXaUKuF | @inproceedings{
liu2023segment,
title={Segment Any Point Cloud Sequences by Distilling Vision Foundation Models},
author={Youquan Liu and Lingdong Kong and Jun CEN and Runnan Chen and Wenwei Zhang and Liang Pan and Kai Chen and Ziwei Liu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=i39yXaUKuF}
} | Recent advancements in vision foundation models (VFMs) have opened up new possibilities for versatile and efficient visual perception. In this work, we introduce Seal, a novel framework that harnesses VFMs for segmenting diverse automotive point cloud sequences. Seal exhibits three appealing properties: i) Scalability: VFMs are directly distilled into point clouds, obviating the need for annotations in either 2D or 3D during pretraining. ii) Consistency: Spatial and temporal relationships are enforced at both the camera-to-LiDAR and point-to-segment regularization stages, facilitating cross-modal representation learning. iii) Generalizability: Seal enables knowledge transfer in an off-the-shelf manner to downstream tasks involving diverse point clouds, including those from real/synthetic, low/high-resolution, large/small-scale, and clean/corrupted datasets. Extensive experiments conducted on eleven different point cloud datasets showcase the effectiveness and superiority of Seal. Notably, Seal achieves a remarkable 45.0% mIoU on nuScenes after linear probing, surpassing random initialization by 36.9% mIoU and outperforming prior arts by 6.1% mIoU. Moreover, Seal demonstrates significant performance gains over existing methods across 20 different few-shot fine-tuning tasks on all eleven tested point cloud datasets. The code is available at this link. | Segment Any Point Cloud Sequences by Distilling Vision Foundation Models | [
"Youquan Liu",
"Lingdong Kong",
"Jun CEN",
"Runnan Chen",
"Wenwei Zhang",
"Liang Pan",
"Kai Chen",
"Ziwei Liu"
] | Conference | spotlight | 2306.09347 | [
"https://github.com/xiaoaoran/SynLiDAR"
] | https://huggingface.co/papers/2306.09347 | 1 | 1 | 0 | 8 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=i2H2sEiq2T | @inproceedings{
kong2023a,
title={A Unified Fast Gradient Clipping Framework for {DP}-{SGD}},
author={Weiwei Kong and Andres Munoz medina},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=i2H2sEiq2T}
} | A well-known numerical bottleneck in the differentially-private stochastic gradient descent (DP-SGD) algorithm is the computation of the gradient norm for each example in a large input batch. When the loss function in DP-SGD consists of an intermediate linear operation, existing methods in the literature have proposed decompositions of gradients that are amenable to fast norm computations. In this paper, we present a framework that generalizes the above approach to arbitrary (possibly nonlinear) intermediate operations. Moreover, we show that for certain operations, such as fully-connected and embedding layer computations, further improvements to the runtime and storage costs of existing decompositions can be deduced using certain components of our framework. Finally, preliminary numerical experiments are given to demonstrate the substantial effects of the aforementioned improvements. | A Unified Fast Gradient Clipping Framework for DP-SGD | [
"Weiwei Kong",
"Andres Munoz medina"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=i28zCSsQIc | @inproceedings{
beugnot2023gloptinets,
title={GloptiNets: Scalable Non-Convex Optimization with Certificates},
author={Gaspard Beugnot and Julien Mairal and Alessandro Rudi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=i28zCSsQIc}
} | We present a novel approach to non-convex optimization with certificates, which handles smooth functions on the hypercube or on the torus. Unlike traditional methods that rely on algebraic properties, our algorithm exploits the regularity of the target function intrinsic in the decay of its Fourier spectrum. By defining a tractable family of models, we allow {\em at the same time} to obtain precise certificates and to leverage the advanced and powerful computational techniques developed to optimize neural networks. In this way the scalability of our approach is naturally enhanced by parallel computing with GPUs. Our approach, when applied to the case of polynomials of moderate dimensions but with thousands of coefficients, outperforms the state-of-the-art optimization methods with certificates, as the ones based on Lasserre's hierarchy, addressing problems intractable for the competitors. | GloptiNets: Scalable Non-Convex Optimization with Certificates | [
"Gaspard Beugnot",
"Julien Mairal",
"Alessandro Rudi"
] | Conference | spotlight | 2306.14932 | [
"https://github.com/gaspardbb/gloptinets.jl"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=i0OmcF14Kf | @inproceedings{
wang2023statespace,
title={State-space models with layer-wise nonlinearity are universal approximators with exponential decaying memory},
author={Shida Wang and Beichen Xue},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=i0OmcF14Kf}
} | State-space models have gained popularity in sequence modelling due to their simple and efficient network structures. However, the absence of nonlinear activation along the temporal direction limits the model's capacity. In this paper, we prove that stacking state-space models with layer-wise nonlinear activation is sufficient to approximate any continuous sequence-to-sequence relationship. Our findings demonstrate that the addition of layer-wise nonlinear activation enhances the model's capacity to learn complex sequence patterns. Meanwhile, it can be seen both theoretically and empirically that the state-space models do not fundamentally resolve the issue of exponential decaying memory. Theoretical results are justified by numerical verifications. | State-space models with layer-wise nonlinearity are universal approximators with exponential decaying memory | [
"Shida Wang",
"Beichen Xue"
] | Conference | poster | 2309.13414 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=hzND3ZEFg2 | @inproceedings{
hong2023learning,
title={Learning to Influence Human Behavior with Offline Reinforcement Learning},
author={Joey Hong and Sergey Levine and Anca Dragan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hzND3ZEFg2}
} | When interacting with people, AI agents do not just influence the state of the world -- they also influence the actions people take in response to the agent, and even their underlying intentions and strategies. Accounting for and leveraging this influence has mostly been studied in settings where it is sufficient to assume that human behavior is near-optimal: competitive games, or general-sum settings like autonomous driving alongside human drivers. Instead, we focus on influence in settings where there is a need to capture human suboptimality. For instance, imagine a collaborative task in which, due either to cognitive biases or lack of information, people do not perform very well -- how could an agent influence them towards more optimal behavior? Assuming near-optimal human behavior will not work here, and so the agent needs to learn from real human data. But experimenting online with humans is potentially unsafe, and creating a high-fidelity simulator of the environment is often impractical. Hence, we focus on learning from an offline dataset of human-human interactions. Our observation is that offline reinforcement learning (RL) can learn to effectively influence suboptimal humans by extending and combining elements of observed human-human behavior. We demonstrate that offline RL can solve two challenges with effective influence. First, we show that by learning from a dataset of suboptimal human-human interaction on a variety of tasks -- none of which contains examples of successful influence -- an agent can learn influence strategies to steer humans towards better performance even on new tasks. Second, we show that by also modeling and conditioning on human behavior, offline RL can learn to affect not just the human's actions but also their underlying strategy, and adapt to changes in their strategy. | Learning to Influence Human Behavior with Offline Reinforcement Learning | [
"Joey Hong",
"Sergey Levine",
"Anca Dragan"
] | Conference | poster | 2303.02265 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=hz33V7Tb2O | @inproceedings{
kang2023clear,
title={{CL}e{AR}: Continual Learning on Algorithmic Reasoning for Human-like Intelligence},
author={Bong Gyun Kang and HyunGi Kim and Dahuin Jung and Sungroh Yoon},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hz33V7Tb2O}
} | Continual learning (CL) aims to incrementally learn multiple tasks that are presented sequentially. The significance of CL lies not only in the practical importance but also in studying the learning mechanisms of humans who are excellent continual learners. While most research on CL has been done on structured data such as images, there is a lack of research on CL for abstract logical concepts such as counting, sorting, and arithmetic, which humans learn gradually over time in the real world. In this work, for the first time, we introduce novel algorithmic reasoning (AR) methodology for continual tasks of abstract concepts: CLeAR. Our methodology proposes a one-to-many mapping of input distribution to a shared mapping space, which allows the alignment of various tasks of different dimensions and shared semantics. Our tasks of abstract logical concepts, in the form of formal language, can be classified into Chomsky hierarchies based on their difficulty. In this study, we conducted extensive experiments consisting of 15 tasks with various levels of Chomsky hierarchy, ranging from in-hierarchy to inter-hierarchy scenarios. CLeAR not only achieved near zero forgetting but also improved accuracy during following tasks, a phenomenon known as backward transfer, while previous CL methods designed for image classification drastically failed. | CLeAR: Continual Learning on Algorithmic Reasoning for Human-like Intelligence | [
"Bong Gyun Kang",
"HyunGi Kim",
"Dahuin Jung",
"Sungroh Yoon"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=hz10oiVMNE | @inproceedings{
srinivasa2023cwcl,
title={{CWCL}: Cross-Modal Transfer with Continuously Weighted Contrastive Loss},
author={Rakshith Sharma Srinivasa and Jaejin Cho and Chouchang Yang and Yashas Malur Saidutta and Ching-Hua Lee and Yilin Shen and Hongxia Jin},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hz10oiVMNE}
} | This paper considers contrastive training for cross-modal 0-shot transfer wherein a pre-trained model in one modality is used for representation learning in another domain using pairwise data. The learnt models in the latter domain can then be used for a diverse set of tasks in a 0-shot way, similar to Contrastive Language-Image Pre-training (CLIP) and Locked-image Tuning (LiT) that have recently gained considerable attention. Classical contrastive training employs sets of positive and negative examples to align similar and repel dissimilar training data samples. However, similarity amongst training examples has a more continuous nature, thus calling for a more `non-binary' treatment. To address this, we propose a new contrastive loss function called Continuously Weighted Contrastive Loss (CWCL) that employs a continuous measure of similarity. With CWCL, we seek to transfer the structure of the embedding space from one modality to another. Owing to the continuous nature of similarity in the proposed loss function, these models outperform existing methods for 0-shot transfer across multiple models, datasets and modalities. By using publicly available datasets, we achieve 5-8% (absolute) improvement over previous state-of-the-art methods in 0-shot image classification and 20-30% (absolute) improvement in 0-shot speech-to-intent classification and keyword classification. | CWCL: Cross-Modal Transfer with Continuously Weighted Contrastive Loss | [
"Rakshith Sharma Srinivasa",
"Jaejin Cho",
"Chouchang Yang",
"Yashas Malur Saidutta",
"Ching-Hua Lee",
"Yilin Shen",
"Hongxia Jin"
] | Conference | poster | 2309.14580 | [
""
] | https://huggingface.co/papers/2309.14580 | 1 | 0 | 0 | 7 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=hyPUZX03Ks | @inproceedings{
fiquet2023a,
title={A polar prediction model for learning to represent visual transformations},
author={Pierre-{\'E}tienne H Fiquet and Eero P Simoncelli},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hyPUZX03Ks}
} | All organisms make temporal predictions, and their evolutionary fitness level depends on the accuracy of these predictions. In the context of visual perception, the motions of both the observer and objects in the scene structure the dynamics of sensory signals, allowing for partial prediction of future signals based on past ones. Here, we propose a self-supervised representation-learning framework that extracts and exploits the regularities of natural videos to compute accurate predictions. We motivate the polar architecture by appealing to the Fourier shift theorem and its group-theoretic generalization, and we optimize its parameters on next-frame prediction. Through controlled experiments, we demonstrate that this approach can discover the representation of simple transformation groups acting in data. When trained on natural video datasets, our framework achieves better prediction performance than traditional motion compensation and rivals conventional deep networks, while maintaining interpretability and speed. Furthermore, the polar computations can be restructured into components resembling normalized simple and direction-selective complex cell models of primate V1 neurons. Thus, polar prediction offers a principled framework for understanding how the visual system represents sensory inputs in a form that simplifies temporal prediction. | A polar prediction model for learning to represent visual transformations | [
"Pierre-Étienne H Fiquet",
"Eero P Simoncelli"
] | Conference | poster | 2303.03432 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=hxJu0386if | @inproceedings{
wang2023focus,
title={Focus on Query: Adversarial Mining Transformer for Few-Shot Segmentation},
author={Yuan Wang and Naisong Luo and Tianzhu Zhang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hxJu0386if}
} | Few-shot segmentation (FSS) aims to segment objects of new categories given only a handful of annotated samples. Previous works focus their efforts on exploring the support information while paying less attention to the mining of the critical query branch. In this paper, we rethink the importance of support information and propose a new query-centric FSS model Adversarial Mining Transformer (AMFormer), which achieves accurate query image segmentation with only rough support guidance or even weak support labels. The proposed AMFormer enjoys several merits. First, we design an object mining transformer (G) that can achieve the expansion of incomplete region activated by support clue, and a detail mining transformer (D) to discriminate the detailed local difference between the expanded mask and the ground truth. Second, we propose to train G and D via an adversarial process, where G is optimized to generate more accurate masks approaching ground truth to fool D. We conduct extensive experiments on commonly used Pascal-5i and COCO-20i benchmarks and achieve state-of-the-art results across all settings. In addition, the decent performance with weak support labels in our query-centric paradigm may inspire the development of more general FSS models. | Focus on Query: Adversarial Mining Transformer for Few-Shot Segmentation | [
"Yuan Wang",
"Naisong Luo",
"Tianzhu Zhang"
] | Conference | poster | 2311.17626 | [
"https://github.com/wyxdm/amnet"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=hwjmEZ8561 | @inproceedings{
vilas2023analyzing,
title={Analyzing Vision Transformers for Image Classification in Class Embedding Space},
author={Martina G. Vilas and Timothy Schauml{\"o}ffel and Gemma Roig},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=hwjmEZ8561}
} | Despite the growing use of transformer models in computer vision, a mechanistic understanding of these networks is still needed. This work introduces a method to reverse-engineer Vision Transformers trained to solve image classification tasks. Inspired by previous research in NLP, we demonstrate how the inner representations at any level of the hierarchy can be projected onto the learned class embedding space to uncover how these networks build categorical representations for their predictions. We use our framework to show how image tokens develop class-specific representations that depend on attention mechanisms and contextual information, and give insights on how self-attention and MLP layers differentially contribute to this categorical composition. We additionally demonstrate that this method (1) can be used to determine the parts of an image that would be important for detecting the class of interest, and (2) exhibits significant advantages over traditional linear probing approaches. Taken together, our results position our proposed framework as a powerful tool for mechanistic interpretability and explainability research. | Analyzing Vision Transformers for Image Classification in Class Embedding Space | [
"Martina G. Vilas",
"Timothy Schaumlöffel",
"Gemma Roig"
] | Conference | poster | 2310.18969 | [
"https://github.com/martinagvilas/vit-cls_emb"
] | https://huggingface.co/papers/2310.18969 | 1 | 0 | 0 | 3 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=huh0XmSdBK | @inproceedings{
jha2023npcl,
title={{NPCL}: Neural Processes for Uncertainty-Aware Continual Learning},
author={Saurav Jha and Dong Gong and He Zhao and Lina Yao},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=huh0XmSdBK}
} | Continual learning (CL) aims to train deep neural networks efficiently on streaming data while limiting the forgetting caused by new tasks. However, learning transferable knowledge with less interference between tasks is difficult, and real-world deployment of CL models is limited by their inability to measure predictive uncertainties. To address these issues, we propose handling CL tasks with neural processes (NPs), a class of meta-learners that encode different tasks into probabilistic distributions over functions all while providing reliable uncertainty estimates. Specifically, we propose an NP-based CL approach (NPCL) with task-specific modules arranged in a hierarchical latent variable model. We tailor regularizers on the learned latent distributions to alleviate forgetting. The uncertainty estimation capabilities of the NPCL can also be used to handle the task head/module inference challenge in CL. Our experiments show that the NPCL outperforms previous CL approaches. We validate the effectiveness of uncertainty estimation in the NPCL for identifying novel data and evaluating instance-level model confidence. Code is available at https://github.com/srvCodes/NPCL. | NPCL: Neural Processes for Uncertainty-Aware Continual Learning | [
"Saurav Jha",
"Dong Gong",
"He Zhao",
"Lina Yao"
] | Conference | poster | [
"https://github.com/srvCodes/NPCL"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=htkdwc6jDB | @inproceedings{
klede2023pvalue,
title={$p$-value Adjustment for Monotonous, Unbiased, and Fast Clustering Comparison},
author={Kai Klede and Thomas Altstidl and Dario Zanca and Bjoern Eskofier},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=htkdwc6jDB}
} | Popular metrics for clustering comparison, like the Adjusted Rand Index and the Adjusted Mutual Information, are type II biased. The Standardized Mutual Information removes this bias but suffers from counterintuitive non-monotonicity and poor computational efficiency. We introduce the $p$-value adjusted Rand Index ($\operatorname{PMI}_2$), the first cluster comparison method that is type II unbiased and provably monotonous. The $\operatorname{PMI}_2$ has fast approximations that outperform the Standardized Mutual information. We demonstrate its unbiased clustering selection, approximation quality, and runtime efficiency on synthetic benchmarks. In experiments on image and social network datasets, we show how the $\operatorname{PMI}_2$ can help practitioners choose better clustering and community detection algorithms. | p-value Adjustment for Monotonous, Unbiased, and Fast Clustering Comparison | [
"Kai Klede",
"Thomas Altstidl",
"Dario Zanca",
"Bjoern Eskofier"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |