bibtex_url (null) | proceedings (string, 42) | bibtext (string, 197–848) | abstract (string, 303–3.45k) | title (string, 10–159) | authors (sequence, 1–34, ⌀) | id (44 classes) | arxiv_id (string, 0–10) | GitHub (sequence, 1) | paper_page (899 classes) | n_linked_authors (int64, -1–13) | upvotes (int64, -1–109) | num_comments (int64, -1–13) | n_authors (int64, -1–92) | Models (sequence, 0–100) | Datasets (sequence, 0–19) | Spaces (sequence, 0–100) | old_Models (sequence, 0–100) | old_Datasets (sequence, 0–19) | old_Spaces (sequence, 0–100) | paper_page_exists_pre_conf (int64, 0–1) | type (2 classes)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | https://openreview.net/forum?id=Ejw5zOOgLp | @inproceedings{
paul2023an,
title={An Energy Based Model for Incorporating Sequence Priors for Target-Specific Antibody Design},
author={Steffanie Paul and Yining Huang and Debora Marks},
booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop},
year={2023},
url={https://openreview.net/forum?id=Ejw5zOOgLp}
} | With the growing demand for antibody therapeutics, there is a great need for computational methods to accelerate antibody discovery and optimization. Advances in machine learning on graphs have been leveraged to develop generative models of antibody sequence and structure that condition on specific antigen epitopes. However, the data availability for training models on structure (∼5k antibody binding complexes, Schneider et al. [2022]) is dwarfed by the amount of antibody sequence data available (>550M sequences, Olsen et al. [2022]), which has been used to train protein language models useful for antibody generation and optimization. Here we motivate the combination of well-trained antibody sequence models and graph generative models on target structures to enhance their performance for target-conditioned antibody design. First, we present the results of an investigation into the sitewise design performance of popular target-conditioned design models. We show that target-conditioned models may not be incorporating target information into the generation of middle loop residues of the complementarity-determining region of the antibody sequence. Next, we propose an energy-based model framework designed to encourage a model to learn target-specific information by supplementing it with pre-trained marginal-sequence information. We present preliminary results on the development of this model and outline future steps to improve the model framework. | An Energy Based Model for Incorporating Sequence Priors for Target-Specific Antibody Design | [
"Yining Huang",
"Steffanie Paul",
"Debora Marks"
] | Workshop/GenBio | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=EKt4NQZ47U | @inproceedings{
park2023preference,
title={Preference Optimization for Molecular Language Models},
author={Ryan Park and Ryan Theisen and Rayees Rahman and Anna Cicho{\'n}ska and Marcel Patek and Navriti Sahni},
booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop},
year={2023},
url={https://openreview.net/forum?id=EKt4NQZ47U}
} | Molecular language modeling is an effective approach to generating novel chemical structures. However, these models do not \emph{a priori} encode certain preferences a chemist may desire. We investigate the use of fine-tuning using Direct Preference Optimization to better align generated molecules with chemist preferences. Our findings suggest that this approach is simple, efficient, and highly effective. | Preference Optimization for Molecular Language Models | [
"Ryan Park",
"Ryan Theisen",
"Rayees Rahman",
"Anna Cichońska",
"Marcel Patek",
"Navriti Sahni"
] | Workshop/GenBio | 2310.12304 | [
"https://github.com/harmonic-discovery/pref-opt-for-mols"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=E3HN48zjam | @inproceedings{
ghorbani2023autoregressive,
title={Autoregressive fragment-based diffusion for pocket-aware ligand design},
author={Mahdi Ghorbani and Leo Gendelev and Paul Beroza and Michael Keiser},
booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop},
year={2023},
url={https://openreview.net/forum?id=E3HN48zjam}
} | In this work, we introduce AutoFragDiff, a fragment-based autoregressive diffusion model for generating 3D molecular structures conditioned on target protein structures. We employ geometric vector perceptrons to predict atom types and spatial coordinates of new molecular fragments conditioned on molecular scaffolds and protein pockets. Our approach improves the local geometry of the resulting 3D molecules while maintaining high predicted binding affinity to protein targets. The model can also perform scaffold extension from a user-provided starting molecular scaffold. | Autoregressive fragment-based diffusion for pocket-aware ligand design | [
"Mahdi Ghorbani",
"Leo Gendelev",
"Paul Beroza",
"Michael Keiser"
] | Workshop/GenBio | 2401.05370 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=DpbMk2KOOX | @inproceedings{
hwang2023genomic,
title={Genomic language model predicts protein co-regulation and function},
author={Yunha Hwang and Andre Cornman and Sergey Ovchinnikov and Peter Girguis},
booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop},
year={2023},
url={https://openreview.net/forum?id=DpbMk2KOOX}
} | Deciphering the relationship between a gene and its genomic context is fundamental to understanding and engineering biological systems. Machine learning has shown promise in learning latent relationships underlying the sequence-structure-function paradigm from massive protein sequence datasets; however, to date, limited attempts have been made to extend this continuum to include higher-order genomic context information. Here, we trained a genomic language model (gLM) on millions of metagenomic scaffolds to learn the latent functional and regulatory relationships between genes. gLM learns contextualized protein embeddings that capture the genomic context as well as the protein sequence itself, and appears to encode biologically meaningful and functionally relevant information (e.g. enzymatic function). Our analysis of the attention patterns demonstrates that gLM is learning co-regulated functional modules (i.e. operons). Our findings illustrate that gLM's unsupervised deep learning of the metagenomic corpus is an effective and promising approach to encode functional semantics and regulatory syntax of genes in their genomic contexts and uncover complex relationships between genes in a genomic region. | Genomic language model predicts protein co-regulation and function | [
"Yunha Hwang",
"Andre Cornman",
"Sergey Ovchinnikov",
"Peter Girguis"
] | Workshop/GenBio | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=DUjUJCqqA7 | @inproceedings{
lee2023finetuning,
title={Fine-tuning protein Language Models by ranking protein fitness},
author={Minji Lee and Kyungmin Lee and Jinwoo Shin},
booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop},
year={2023},
url={https://openreview.net/forum?id=DUjUJCqqA7}
} | Self-supervised protein language models (pLMs) have demonstrated significant potential in predicting the impact of mutations on protein function and fitness, which is crucial for protein design. There are approaches that further condition a pLM on language or multiple sequence alignments (MSAs) to produce a protein of a specific family or function. However, most such conditioning is too coarse-grained to express the function, and the resulting models still exhibit a weak correlation to fitness and struggle to generate fit variants. To address this challenge, we propose a fine-tuning framework that aligns a pLM to a specific fitness by ranking the mutants. We show that constructing the ranked pairs is crucial in fine-tuning pLMs, and we provide a simple yet effective method to improve fitness prediction across various datasets. Through experiments on ProteinGym, our method shows substantial improvements in fitness prediction tasks even using fewer than 200 labeled data points. Furthermore, we demonstrate that our approach excels in fitness optimization tasks. | Fine-tuning protein Language Models by ranking protein fitness | [
"Minji Lee",
"Kyungmin Lee",
"Jinwoo Shin"
] | Workshop/GenBio | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=D6PJjvaE3D | @inproceedings{
chen2023amalga,
title={Amalga: Designable Protein Backbone Generation with Folding and Inverse Folding Guidance},
author={Shugao Chen and Ziyao Li and xiangxiang Zeng and Guolin Ke},
booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop},
year={2023},
url={https://openreview.net/forum?id=D6PJjvaE3D}
} | Recent advances in deep learning enable new approaches to protein design through inverse folding and backbone generation. However, backbone generators may produce structures that inverse folding struggles to identify sequences for, indicating designability issues. We propose Amalga, an inference-time technique that enhances designability of backbone generators. Amalga leverages folding and inverse folding models to guide backbone generation towards more designable conformations by incorporating ``folded-from-inverse-folded'' (FIF) structures. To generate FIF structures, possible sequences are predicted from step-wise predictions in the reverse diffusion and further folded into new backbones. Being intrinsically designable, the FIF structures guide the generated backbones to a more designable distribution. Experiments on both de novo design and motif-scaffolding demonstrate improved designability and diversity with Amalga on RFdiffusion. | Amalga: Designable Protein Backbone Generation with Folding and Inverse Folding Guidance | [
"Shugao Chen",
"Ziyao Li",
"xiangxiang Zeng",
"Guolin Ke"
] | Workshop/GenBio | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=CsjGuWD7hk | @inproceedings{
manshour2023integrating,
title={Integrating Protein Structure Prediction and Bayesian Optimization for Peptide Design},
author={Negin Manshour and Fei He and Duolin Wang and Dong Xu},
booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop},
year={2023},
url={https://openreview.net/forum?id=CsjGuWD7hk}
} | Peptide design, with the goal of identifying peptides possessing unique biological properties, stands as a crucial challenge in peptide-based drug discovery. While traditional and computational methods have made significant strides, they often encounter hurdles due to the complexities and costs of laboratory experiments. Recent advancements in deep learning and Bayesian Optimization have paved the way for innovative research in this domain. In this context, our study presents a novel approach that effectively combines protein structure prediction with Bayesian Optimization for peptide design. By applying carefully designed objective functions, we guide and enhance the optimization trajectory for new peptide sequences. Benchmarked against multiple native structures, our methodology is tailored to generate new peptides with optimal potential biological properties. | Integrating Protein Structure Prediction and Bayesian Optimization for Peptide Design | [
"Negin Manshour",
"Fei He",
"Duolin Wang",
"Dong Xu"
] | Workshop/GenBio | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=ChU7MCLk1J | @inproceedings{
gong2023binding,
title={Binding Oracle: Fine-Tuning From Stability to Binding Free Energy},
author={Chengyue Gong and Adam Klivans and Jordan Wells and James Loy and qiang liu and Alex Dimakis and Daniel Diaz},
booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop},
year={2023},
url={https://openreview.net/forum?id=ChU7MCLk1J}
} | The ability to predict changes in binding free energy (ΔΔ$G_{bind}$) for mutations at protein-protein interfaces (PPIs) is critical for understanding genetic diseases and engineering novel protein-based therapeutics. Here, we present Binding Oracle: a structure-based graph transformer for predicting ΔΔ$G_{bind}$ at PPIs. Binding Oracle fine-tunes Stability Oracle with Selective LoRA: a technique that synergizes layer selection via gradient norms with LoRA. Selective LoRA enables the identification and fine-tuning of the layers most critical for the downstream task, thus regularizing against overfitting. Additionally, we present new training-test splits of mutational data from the SKEMPI2.0, Ab-Bind, and NABE databases that use a strict 30\% sequence similarity threshold to avoid data leakage during model evaluation. Binding Oracle, when trained with the Thermodynamic Permutations data augmentation technique, achieves SOTA on S487 without using any evolutionary auxiliary features. Our results empirically demonstrate how sparse fine-tuning techniques, such as Selective LoRA, can enable rapid domain adaptation in protein machine learning frameworks. | Binding Oracle: Fine-Tuning From Stability to Binding Free Energy | [
"Chengyue Gong",
"Adam Klivans",
"Jordan Wells",
"James Loy",
"qiang liu",
"Alex Dimakis",
"Daniel Diaz"
] | Workshop/GenBio | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=CKCNCW9wxB | @inproceedings{
swanson2023generative,
title={Generative {AI} for designing and validating easily synthesizable and structurally novel antibiotics},
author={Kyle Swanson and Gary Liu and Denise Catacutan and James Zou and Jonathan Stokes},
booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop},
year={2023},
url={https://openreview.net/forum?id=CKCNCW9wxB}
} | The rise of pan-resistant bacteria is creating an urgent need for structurally novel antibiotics. AI methods can discover new antibiotics, but existing methods have significant limitations. Property prediction models, which evaluate molecules one-by-one for a given property, scale poorly to large chemical spaces. Generative models, which directly design molecules, rapidly explore vast chemical spaces but generate molecules that are challenging to synthesize. Here, we introduce SyntheMol, a generative model that designs easily synthesizable compounds from a chemical space of 30 billion molecules. We apply SyntheMol to design molecules that inhibit the growth of Acinetobacter baumannii, a burdensome bacterial pathogen. We synthesize 58 generated molecules and experimentally validate them, with six structurally novel molecules demonstrating potent activity against A. baumannii and several other phylogenetically diverse bacterial pathogens. | Generative AI for designing and validating easily synthesizable and structurally novel antibiotics | [
"Kyle Swanson",
"Gary Liu",
"Denise Catacutan",
"James Zou",
"Jonathan Stokes"
] | Workshop/GenBio | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=BE2hok0lES | @inproceedings{
alcaide2023umdfit,
title={{UMD}-fit: Generating Realistic Ligand Conformations for Distance-Based Deep Docking Models},
author={Eric Alcaide and Ziyao Li and Hang Zheng and Zhifeng Gao and Guolin Ke},
booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop},
year={2023},
url={https://openreview.net/forum?id=BE2hok0lES}
} | Recent advances in deep learning have enabled fast and accurate prediction of protein-ligand binding poses through methods such as Uni-Mol Docking. These techniques utilize deep neural networks to predict interatomic distances between proteins and ligands. Subsequently, ligand conformations are generated to satisfy the predicted distance constraints. However, directly optimizing atomic coordinates often results in distorted, and thus invalid, ligand geometries, which are disastrous in actual drug development. We introduce UMD-fit as a practical solution to this problem, applicable to all distance-based methods. We demonstrate it as an improvement to Uni-Mol Docking, which retains the overall distance prediction pipeline while optimizing ligand positions, orientations, and torsion angles instead. Experimental evidence shows that UMD-fit resolves the vast majority of invalid conformation issues while maintaining accuracy. | UMD-fit: Generating Realistic Ligand Conformations for Distance-Based Deep Docking Models | [
"Eric Alcaide",
"Ziyao Li",
"Hang Zheng",
"Zhifeng Gao",
"Guolin Ke"
] | Workshop/GenBio | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=AlPg6if5PU | @inproceedings{
guo2023diffdocksite,
title={DiffDock-Site: A Novel Paradigm for Enhanced Protein-Ligand Predictions through Binding Site Identification},
author={Huanlei Guo and Song Liu and Mingdi HU and Yilun Lou and Bingyi Jing},
booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop},
year={2023},
url={https://openreview.net/forum?id=AlPg6if5PU}
} | In the realm of computational drug discovery, molecular docking and ligand-binding site (LBS) identification stand as pivotal contributors, often influencing the direction of innovative drug development. DiffDock, a state-of-the-art method, is renowned for its molecular docking capabilities harnessing diffusion mechanisms. However, its computational demands, arising from its extensive score model designed to cater to a broad dynamic range for denoising score matching, can be challenging. To address this problem, we present DiffDock-Site, a novel paradigm that integrates the precision of PointSite for identifying and initializing the docking pocket. This two-stage strategy then refines the ligand's position, orientation, and rotatable bonds using a more concise score model than traditional DiffDock. By emphasizing the dynamic range around the pinpointed pocket center, our approach dramatically elevates both efficiency and accuracy in molecular docking. We achieve a substantial reduction in mean RMSD and centroid distance, from 7.5 to 5.2 and 5.5 to 2.9, respectively. Remarkably, our approach delivers these precision gains using only 1/6 of the model parameters and expends just 1/13 of the training time, underscoring its unmatched combination of computational efficiency and predictive accuracy. | DiffDock-Site: A Novel Paradigm for Enhanced Protein-Ligand Predictions through Binding Site Identification | [
"Huanlei Guo",
"Song Liu",
"Mingdi HU",
"Yilun Lou",
"Bingyi Jing"
] | Workshop/GenBio | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=ALsSka1db3 | @inproceedings{
pedawi2023through,
title={Through the looking glass: navigating in latent space to optimize over combinatorial synthesis libraries},
author={Aryan Pedawi and Saulo De Oliveira and Henry van den Bedem},
booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop},
year={2023},
url={https://openreview.net/forum?id=ALsSka1db3}
} | Commercially available, synthesis-on-demand virtual libraries contain trillions of readily synthesizable compounds and can serve as a bridge between _in silico_ property optimization and _in vitro_ validation. However, as these libraries continue to grow exponentially in size, traditional enumerative search strategies that scale linearly with the number of compounds encounter significant limitations. Hierarchical enumeration approaches scale more gracefully in library size, but are inherently greedy and implicitly rest on an additivity assumption of the molecular property with respect to its sub-components. In this work, we present a reinforcement learning approach to retrieving compounds from ultra-large libraries that satisfy a set of user-specified constraints. Along the way, we derive what we believe to be a new family of $\alpha$-divergences that may be of general interest in density estimation. Our method first trains a library-constrained generative model over a virtual library and subsequently trains a normalizing flow to learn a distribution over latent space that decodes constraint-satisfying compounds. The proposed approach naturally accommodates specification of multiple molecular property constraints and requires only black box access to the molecular property functions, thereby supporting a broad class of search problems over these libraries. | Through the looking glass: navigating in latent space to optimize over combinatorial synthesis libraries | [
"Aryan Pedawi",
"Saulo De Oliveira",
"Henry van den Bedem"
] | Workshop/GenBio | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=9BQ3l8OVru | @inproceedings{
ghari2023generative,
title={Generative Flow Networks Assisted Biological Sequence Editing},
author={Pouya M. Ghari and Alex Tseng and G{\"o}kcen Eraslan and Romain Lopez and Tommaso Biancalani and Gabriele Scalia and Ehsan Hajiramezanali},
booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop},
year={2023},
url={https://openreview.net/forum?id=9BQ3l8OVru}
} | Editing biological sequences has extensive applications in synthetic biology and medicine, such as designing regulatory elements for nucleic-acid therapeutics and treating genetic disorders. The primary objective in biological-sequence editing is to determine the optimal modifications to a sequence which augment certain biological properties while adhering to a minimal number of alterations to ensure safety and predictability. In this paper, we propose GFNSeqEditor, a novel biological-sequence editing algorithm which builds on the recently proposed area of generative flow networks (GFlowNets). Our proposed GFNSeqEditor identifies elements within a starting seed sequence that may compromise a desired biological property. Then, using a learned stochastic policy, the algorithm makes edits at these identified locations, offering diverse modifications for each sequence in order to enhance the desired property. Notably, GFNSeqEditor prioritizes edits with a higher likelihood of substantially improving the desired property. Furthermore, the number of edits can be regulated through specific hyperparameters. We conducted extensive experiments on a range of real-world datasets and biological applications, and our results underscore the superior performance of our proposed algorithm compared to existing state-of-the-art sequence editing methods. | Generative Flow Networks Assisted Biological Sequence Editing | [
"Pouya M. Ghari",
"Alex Tseng",
"Gökcen Eraslan",
"Romain Lopez",
"Tommaso Biancalani",
"Gabriele Scalia",
"Ehsan Hajiramezanali"
] | Workshop/GenBio | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=8PbTU4exnV | @inproceedings{
paul2023combining,
title={Combining Structure and Sequence for Superior Fitness Prediction},
author={Steffanie Paul and Aaron Kollasch and Pascal Notin and Debora Marks},
booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop},
year={2023},
url={https://openreview.net/forum?id=8PbTU4exnV}
} | Deep generative models of protein sequence and inverse folding models have shown great promise as protein design methods. While sequence-based models have shown strong zero-shot mutation effect prediction performance, inverse folding models have not been extensively characterized in this way. As these models use information from protein structures, it is likely that inverse folding models possess inductive biases that make them better predictors of certain function types. Using the collection of model scores contained in the newly updated ProteinGym, we systematically explore the differential zero-shot predictive power of sequence and inverse folding models. We find that inverse folding models consistently outperform the best-in-class sequence models on assays of protein thermostability, but have lower performance on other properties. Motivated by these findings, we develop StructSeq, an ensemble model combining information from sequence, multiple sequence alignments (MSAs), and structure. StructSeq achieves state-of-the-art Spearman correlation on ProteinGym and is robust to different functional assay types. | Combining Structure and Sequence for Superior Fitness Prediction | [
"Steffanie Paul",
"Aaron Kollasch",
"Pascal Notin",
"Debora Marks"
] | Workshop/GenBio | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=73wnK2BvWg | @inproceedings{
larsen2023improving,
title={Improving Precision in Language Models Learning from Invalid Samples},
author={Niels Larsen and Giorgio Giannone and Ole Winther and Kai Blin},
booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop},
year={2023},
url={https://openreview.net/forum?id=73wnK2BvWg}
} | Language Models are powerful generative tools capable of learning intricate patterns from vast amounts of unstructured data. Nevertheless, in domains that demand precision, such as science and engineering, the primary objective is to obtain an exact and accurate answer. Precision takes precedence in these contexts. In specialized tasks like chemical compound generation, the emphasis is on output accuracy rather than response diversity. Traditional self-refinement methods are ineffective for such domain-specific input/output pairs, unlike general language tasks. In this study, we introduce invalid2valid, a powerful and general post-processing mechanism that can significantly enhance precision in language models for input/output tasks spanning different domains and specialized applications. | Improving Precision in Language Models Learning from Invalid Samples | [
"Niels Larsen",
"Giorgio Giannone",
"Ole Winther",
"Kai Blin"
] | Workshop/GenBio | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=6NtRll9VdH | @inproceedings{
nagaraj2023machine,
title={Machine learning derived embeddings of bulk multi-omics data enable clinically significant representations in a pan-cancer cohort},
author={Sanjay Nagaraj and ZACHARY MCCAW and Theofanis Karaletsos and Daphne Koller and Anna Shcherbina},
booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop},
year={2023},
url={https://openreview.net/forum?id=6NtRll9VdH}
} | Bulk multiomics data provides a comprehensive view of tissue biology, but datasets rarely contain matched transcriptomics and chromatin accessibility data for a given sample. Furthermore, it is difficult to identify relevant genetic signatures from the high-dimensional, sparse representations provided by omics modalities. Machine learning (ML) models have the ability to extract dense, information-rich, denoised representations from omics data, which facilitate finding novel genetic signatures. To this end, we develop and compare generative ML models through an evaluation framework that examines the biological and clinical relevance of the underlying latent embeddings produced. We focus our analysis on pan-cancer multiomics data from a set of 21 diverse cancer metacohorts across three datasets. We additionally investigate if our framework can generate robust representations from oncology imaging modalities (i.e. histopathology slides). Our best performing models learn clinical and biological signals and show improved performance over traditional baselines in our evaluations, including overall survival prediction. | Machine learning derived embeddings of bulk multi-omics data enable clinically significant representations in a pan-cancer cohort | [
"Sanjay Nagaraj",
"ZACHARY MCCAW",
"Theofanis Karaletsos",
"Daphne Koller",
"Anna Shcherbina"
] | Workshop/GenBio | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=4k926QVVM4 | @inproceedings{
ngo2023targetaware,
title={Target-Aware Variational Auto-Encoders for Ligand Generation with Multi-Modal Protein Modeling},
author={Khang Ngo and Truong Son Hy},
booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop},
year={2023},
url={https://openreview.net/forum?id=4k926QVVM4}
} | Without knowledge of specific pockets, generating ligands based on the global structure of a protein target plays a crucial role in drug discovery as it helps reduce the search space for potential drug-like candidates in the pipeline. However, contemporary methods require optimizing tailored networks for each protein, which is arduous and costly. To address this issue, we introduce TargetVAE, a target-aware variational auto-encoder that generates ligands with high binding affinities to arbitrary protein targets, guided by a novel prior network that learns from entire protein structures. We showcase the superiority of our approach by conducting extensive experiments and evaluations, including the assessment of generative model quality, ligand generation for unseen targets, docking score computation, and binding affinity prediction. Empirical results demonstrate the promising performance of our proposed approach. Our source code in PyTorch is publicly available at https://github.com/HySonLab/Ligand_Generation | Target-Aware Variational Auto-Encoders for Ligand Generation with Multi-Modal Protein Modeling | [
"Khang Ngo",
"Truong Son Hy"
] | Workshop/GenBio | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=4HQtWpQ4WG | @inproceedings{
zhang2023topodiff,
title={TopoDiff: Improve Protein Backbone Generation with Topology-aware Latent Encoding},
author={Yuyang Zhang and Zinnia Ma and Haipeng Gong},
booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop},
year={2023},
url={https://openreview.net/forum?id=4HQtWpQ4WG}
} | The $\textit{de novo}$ design of protein structures is an intriguing research topic in the field of protein engineering. Recent breakthroughs in diffusion-based generative models have demonstrated substantial promise in tackling this task, notably in the generation of diverse and realistic protein structures. While existing models predominantly focus on unconditional generation or fine-grained conditioning at the residue level, the holistic, top-down approaches to control the overall topological arrangements are still insufficiently explored. In response, we introduce TopoDiff, a diffusion-based framework augmented by a global-structure encoding module, which is capable of unsupervisedly learning a compact latent representation of natural protein topologies with interpretable characteristics and simultaneously harnessing this learned information for controllable protein structure generation. We also propose a novel metric specifically designed to assess the coverage of sampled proteins with respect to the natural protein space. In comparative analyses with existing models, our generative model not only demonstrates comparable performance on established metrics but also exhibits better coverage across the recognized topology landscape. In summary, TopoDiff emerges as a novel solution towards enhancing the controllability and comprehensiveness of $\textit{de novo}$ protein structure generation, presenting new possibilities for innovative applications in protein engineering and beyond. | TopoDiff: Improving Protein Backbone Generation with Topology-aware Latent Encoding | [
"Yuyang Zhang",
"Zinnia Ma",
"Haipeng Gong"
] | Workshop/GenBio | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=46okCAggF5 | @inproceedings{
li2023codonbert,
title={Codon{BERT}: Large Language Models for m{RNA} design and optimization},
author={Sizhen Li and Saeed Moayedpour and Ruijiang Li and Michael Bailey and Saleh Riahi and Milad Miladi and Jacob Miner and Dinghai Zheng and Jun Wang and Akshay Balsubramani and Khang Tran and Minnie and Monica Wu and Xiaobo Gu and Ryan Clinton and Carla Asquith and Joseph Skaleski and Lianne Boeglin and Sudha Chivukula and Anusha Dias and Fernando Ulloa Montoya and Vikram Agarwal and Ziv Bar-Joseph and Sven Jager},
booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop},
year={2023},
url={https://openreview.net/forum?id=46okCAggF5}
} | mRNA based vaccines and therapeutics are gaining popularity and usage across a wide range of conditions. One of the critical issues when designing such mRNAs is sequence optimization. Even small proteins or peptides can be encoded by an enormously large number of mRNAs. The actual mRNA sequence can have a large impact on several properties including expression, stability, immunogenicity, and more. To enable the selection of an optimal sequence, we developed CodonBERT, a large language model (LLM) for mRNAs. Unlike prior models, CodonBERT uses codons as inputs which enables it to learn better representations. CodonBERT was trained using more than 10 million mRNA sequences from a diverse set of organisms. The resulting model captures important biological concepts. CodonBERT can also be extended to perform prediction tasks for various mRNA properties. CodonBERT outperforms previous mRNA prediction methods including on a new flu vaccine dataset. | CodonBERT: Large Language Models for mRNA design and optimization | [
"Sizhen Li",
"Saeed Moayedpour",
"Ruijiang Li",
"Michael Bailey",
"Saleh Riahi",
"Milad Miladi",
"Jacob Miner",
"Dinghai Zheng",
"Jun Wang",
"Akshay Balsubramani",
"Khang Tran",
"Minnie",
"Monica Wu",
"Xiaobo Gu",
"Ryan Clinton",
"Carla Asquith",
"Joseph Skaleski",
"Lianne Boeglin",
"Sudha Chivukula",
"Anusha Dias",
"Fernando Ulloa Montoya",
"Vikram Agarwal",
"Ziv Bar-Joseph",
"Sven Jager"
] | Workshop/GenBio | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=3mcBKvkwWg | @inproceedings{
reidenbach2023coarsenconf,
title={CoarsenConf: Equivariant Coarsening with Aggregated Attention for Molecular Conformer Generation},
author={Danny Reidenbach and Aditi Krishnapriyan},
booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop},
year={2023},
url={https://openreview.net/forum?id=3mcBKvkwWg}
} | Molecular conformer generation (MCG) is an important task in cheminformatics and drug discovery. The ability to efficiently generate low-energy 3D structures can avoid expensive quantum mechanical simulations, leading to accelerated virtual screenings and enhanced structural exploration. Several generative models have been developed for MCG, but many struggle to consistently produce high-quality conformers. To address these issues, we introduce CoarsenConf, which coarse-grains molecular graphs based on torsional angles and integrates them into an SE(3)-equivariant hierarchical variational autoencoder. Through equivariant coarse-graining, we aggregate the fine-grained atomic coordinates of subgraphs connected via rotatable bonds, creating a variable-length coarse-grained latent representation. Our model uses a novel aggregated attention mechanism to restore fine-grained coordinates from the coarse-grained latent representation, enabling efficient generation of accurate conformers. Furthermore, we evaluate the chemical and biochemical quality of our generated conformers on multiple downstream applications, including property prediction and oracle-based protein docking. Overall, CoarsenConf generates more accurate conformer ensembles compared to prior generative models. | CoarsenConf: Equivariant Coarsening with Aggregated Attention for Molecular Conformer Generation | [
"Danny Reidenbach",
"Aditi Krishnapriyan"
] | Workshop/GenBio | 2306.14852 | [
"https://github.com/ask-berkeley/coarsenconf"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=2JkSb52D1n | @inproceedings{
izdebski2023de,
title={De Novo Drug Design with Joint Transformers},
author={Adam Izdebski and Ewelina Weglarz-Tomczak and Ewa Szczurek and Jakub Tomczak},
booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop},
year={2023},
url={https://openreview.net/forum?id=2JkSb52D1n}
} | De novo drug design requires simultaneously generating novel molecules outside of training data and predicting their target properties, making it a hard task for generative models. To address this, we propose Joint Transformer that combines a Transformer decoder, Transformer encoder, and a predictor in a joint generative model with shared weights. We show that training the model with a penalized log-likelihood objective results in state-of-the-art performance in molecule generation, while decreasing the prediction error on newly sampled molecules, as compared to a fine-tuned decoder-only Transformer, by 42%. Finally, we propose a probabilistic black-box optimization algorithm that employs Joint Transformer to generate novel molecules with improved target properties and outperform other SMILES-based optimization methods in de novo drug design. | De Novo Drug Design with Joint Transformers | [
"Adam Izdebski",
"Ewelina Weglarz-Tomczak",
"Ewa Szczurek",
"Jakub Tomczak"
] | Workshop/GenBio | 2310.02066 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=1wa9JEanV5 | @inproceedings{
lau2023dgfn,
title={{DGFN}: Double Generative Flow Networks},
author={Elaine Lau and Nikhil Murali Vemgal and Doina Precup and Emmanuel Bengio},
booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop},
year={2023},
url={https://openreview.net/forum?id=1wa9JEanV5}
} | Deep learning is emerging as an effective tool in drug discovery, with potential applications in both predictive and generative models. Generative Flow Networks (GFlowNets/GFNs) are a recently introduced method recognized for the ability to generate diverse candidates, in particular in small molecule generation tasks. In this work, we introduce double GFlowNets (DGFNs). Drawing inspiration from reinforcement learning and Double Deep Q-Learning, we introduce a target network used to sample trajectories, while updating the main network with these sampled trajectories. Empirical results confirm that DGFNs effectively enhance exploration in sparse reward domains and high-dimensional state spaces, both challenging aspects of de-novo design in drug discovery. | DGFN: Double Generative Flow Networks | [
"Elaine Lau",
"Nikhil Murali Vemgal",
"Doina Precup",
"Emmanuel Bengio"
] | Workshop/GenBio | 2310.19685 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=145TM9VQhx | @inproceedings{
chen2023ampdiffusion,
title={{AMP}-Diffusion: Integrating Latent Diffusion with Protein Language Models for Antimicrobial Peptide Generation},
author={Tianlai Chen and Pranay Vure and Rishab Pulugurta and Pranam Chatterjee},
booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop},
year={2023},
url={https://openreview.net/forum?id=145TM9VQhx}
} | Denoising Diffusion Probabilistic Models (DDPMs) have emerged as a potent class of generative models, demonstrating exemplary performance across diverse AI domains such as computer vision and natural language processing. In the realm of protein design, while there have been advances in structure-based, graph-based, and discrete sequence-based diffusion, the exploration of continuous latent space diffusion within protein language models (pLMs) remains nascent. In this work, we introduce AMP-Diffusion, a latent space diffusion model tailored for antimicrobial peptide (AMP) design, harnessing the capabilities of the state-of-the-art pLM, ESM-2, to de novo generate functional AMPs for downstream experimental application. Our evaluations reveal that peptides generated by AMP-Diffusion align closely in both pseudo-perplexity and amino acid diversity when benchmarked against experimentally-validated AMPs, and further exhibit relevant physicochemical properties similar to these naturally-occurring sequences. Overall, these findings underscore the biological plausibility of our generated sequences and pave the way for their empirical validation. In total, our framework motivates future exploration of pLM-based diffusion models for peptide and protein design. | AMP-Diffusion: Integrating Latent Diffusion with Protein Language Models for Antimicrobial Peptide Generation | [
"Tianlai Chen",
"Pranay Vure",
"Rishab Pulugurta",
"Pranam Chatterjee"
] | Workshop/GenBio | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=0gl0SJtd2E | @inproceedings{
hey2023identifying,
title={Identifying Neglected Hypotheses in Neurodegenerative Disease with Large Language Models},
author={Spencer Hey and Darren Angle and Christopher Chatham},
booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop},
year={2023},
url={https://openreview.net/forum?id=0gl0SJtd2E}
} | Neurodegenerative diseases remain a medical challenge, with existing treatments for many such diseases yielding limited benefits. Yet, research into diseases like Alzheimer's often focuses on a narrow set of hypotheses, potentially overlooking promising research avenues. We devised a workflow to curate scientific publications, extract central hypotheses using gpt3.5-turbo, convert these hypotheses into high-dimensional vectors, and cluster them hierarchically. Employing a secondary agglomerative clustering on the "noise" subset, followed by GPT-4 analysis, we identified signals of neglected hypotheses. This methodology unveiled several notable neglected hypotheses including treatment with coenzyme Q10, CPAP treatment to slow cognitive decline, and lithium treatment in Alzheimer's. We believe this methodology offers a novel and scalable approach to identifying overlooked hypotheses and broadening the neurodegenerative disease research landscape. | Identifying Neglected Hypotheses in Neurodegenerative Disease with Large Language Models | [
"Spencer Hey",
"Darren Angle",
"Christopher Chatham"
] | Workshop/GenBio | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=y4hgiutGdr | @inproceedings{
agrawal2023robustness,
title={Robustness to Multi-Modal Environment Uncertainty in {MARL} using Curriculum Learning},
author={Aakriti Agrawal and Rohith Aralikatti and Yanchao Sun and Furong Huang},
booktitle={Multi-Agent Security Workshop @ NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=y4hgiutGdr}
} | Multi-agent reinforcement learning (MARL) plays a pivotal role in tackling real-world challenges. However, the seamless transition of trained policies from simulations to real-world requires it to be robust to various environmental uncertainties. Existing works focus on finding Nash Equilibrium or the optimal policy under uncertainty in a single environment variable (i.e. action, state or reward). This is because a multi-agent system is highly complex and non-stationary. However, in a real-world setting, uncertainty can occur in multiple environment variables simultaneously. This work is the first to formulate the generalised problem of robustness to multi-modal environment uncertainty in MARL. To this end, we propose a general robust training approach for multi-modal uncertainty based on curriculum learning techniques. We handle environmental uncertainty in more than one variable simultaneously and present extensive results across both cooperative and competitive MARL environments, demonstrating that our approach achieves state-of-the-art robustness on three multi-particle environment tasks (Cooperative-Navigation, Keep-Away, Physical Deception). | Robustness to Multi-Modal Environment Uncertainty in MARL using Curriculum Learning | [
"Aakriti Agrawal",
"Rohith Aralikatti",
"Yanchao Sun",
"Furong Huang"
] | Workshop/MASEC | 2310.08746 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=tF464LogjS | @inproceedings{
foxabbott2023defining,
title={Defining and Mitigating Collusion in Multi-Agent Systems},
author={Jack Foxabbott and Sam Deverett and Kaspar Senft and Samuel Dower and Lewis Hammond},
booktitle={Multi-Agent Security Workshop @ NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=tF464LogjS}
} | Collusion between learning agents is increasingly becoming a topic of concern with the advent of more powerful, complex multi-agent systems. In contrast to existing work in narrow settings, we present a general formalisation of collusion between learning agents in partially-observable stochastic games. We discuss methods for intervening on a game to mitigate collusion and provide theoretical as well as empirical results demonstrating the effectiveness of three such interventions. | Defining and Mitigating Collusion in Multi-Agent Systems | [
"Jack Foxabbott",
"Sam Deverett",
"Kaspar Senft",
"Samuel Dower",
"Lewis Hammond"
] | Workshop/MASEC | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=eL61LbI4uv | @inproceedings{
surve2023multiagent,
title={Multiagent Simulators for Social Networks},
author={Aditya Surve and Archit Rathod and Mokshit Surana and Gautam Malpani and Aneesh Shamraj and Sainath Reddy Sankepally and Raghav Jain and Swapneel S Mehta},
booktitle={Multi-Agent Security Workshop @ NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=eL61LbI4uv}
} | Multiagent social network simulations are an avenue that can bridge the communication gap between the public and private platforms in order to develop solutions to a complex array of issues relating to online safety.
While there are significant challenges relating to the scale of multiagent simulations, efficient learning from observational and interventional data to accurately model micro and macro-level emergent effects, there are equally promising opportunities not least with the advent of large language models that provide an expressive approximation of user behavior.
In this position paper, we review prior art relating to social network simulation, highlighting challenges and opportunities for future work exploring multiagent security using agent-based models of social networks. | Multiagent Simulators for Social Networks | [
"Aditya Surve",
"Archit Rathod",
"Mokshit Surana",
"Gautam Malpani",
"Aneesh Shamraj",
"Sainath Reddy Sankepally",
"Raghav Jain",
"Swapneel S Mehta"
] | Workshop/MASEC | 2311.14712 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=cYuE7uV4ut | @inproceedings{
gerstgrasser2023oracles,
title={Oracles \& Followers: Stackelberg Equilibria in Deep Multi-Agent Reinforcement Learning},
author={Matthias Gerstgrasser and David C. Parkes},
booktitle={Multi-Agent Security Workshop @ NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=cYuE7uV4ut}
} | Stackelberg equilibria arise naturally in a range of popular learning problems, such as in security games or indirect mechanism design, and have received increasing attention in the reinforcement learning literature. We present a general framework for implementing Stackelberg equilibria search as a multi-agent RL problem, allowing a wide range of algorithmic design choices. We discuss how previous approaches can be seen as specific instantiations of this framework. As a key insight, we note that the design space allows for approaches not previously seen in the literature, for instance by leveraging multitask and meta-RL techniques for follower convergence. We propose one such approach using contextual policies, and evaluate it experimentally on both standard and novel benchmark domains, showing greatly improved sample efficiency compared to previous approaches. Finally, we explore the effect of adopting algorithm designs outside the borders of our framework. | Oracles & Followers: Stackelberg Equilibria in Deep Multi-Agent Reinforcement Learning | [
"Matthias Gerstgrasser",
"David C. Parkes"
] | Workshop/MASEC | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=XjHF5LWbNS | @inproceedings{
chen2023dynamics,
title={Dynamics Model Based Adversarial Training For Competitive Reinforcement Learning},
author={Xuan Chen and Guanhong Tao and Xiangyu Zhang},
booktitle={Multi-Agent Security Workshop @ NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=XjHF5LWbNS}
} | Adversarial perturbations substantially degrade the performance of Deep Reinforcement Learning (DRL) agents, reducing the applicability of DRL in practice. Existing adversarial training for robustifying DRL uses the information of agent at the current step to minimize the loss upper bound introduced by adversarial input perturbations. It however only works well for single-agent tasks. The enhanced controversy in two-agent games introduces more dynamics and makes existing methods less effective. Inspired by model-based RL that builds a model for the environment transition probability, we propose a dynamics model based adversarial training framework for modeling multi-step state transitions. Our dynamics model transitively predicts future states, which can provide more precise back-propagated future information during adversarial perturbation generation, and hence improve the agent's empirical robustness substantially under different attacks. Our experiments on four two-agent competitive MuJoCo games show that our method consistently outperforms state-of-the-art adversarial training techniques in terms of empirical robustness and normal functionalities of DRL agents. | Dynamics Model Based Adversarial Training For Competitive Reinforcement Learning | [
"Xuan Chen",
"Guanhong Tao",
"Xiangyu Zhang"
] | Workshop/MASEC | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=X8mSMsNbff | @inproceedings{
liu2023beyond,
title={Beyond Worst-case Attacks: Robust {RL} with Adaptive Defense via Non-dominated Policies},
author={Xiangyu Liu and Chenghao Deng and Yanchao Sun and Yongyuan Liang and Furong Huang},
booktitle={Multi-Agent Security Workshop @ NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=X8mSMsNbff}
} | Considerable focus has been directed towards ensuring that reinforcement learning (RL) policies are robust to adversarial attacks during test time. While current approaches are effective against strong attacks for potential worst-case scenarios, these methods often compromise performance in the absence of attacks or the presence of only weak attacks.
To address this, we study policy robustness under the well-accepted state-adversarial attack model, extending our focus beyond merely worst-case attacks. We \textit{refine} the baseline policy class $\Pi$ prior to test time, aiming for efficient adaptation within a compact, finite policy class $\tilde{\Pi}$, which can resort to an adversarial bandit subroutine. We then propose a novel training-time algorithm to iteratively discover \textit{non-dominated policies}, forming a near-optimal and minimal $\tilde{\Pi}$. Empirical validation on the Mujoco corroborates the superiority of our approach in terms of natural and robust performance, as well as adaptability to various attack scenarios. | Beyond Worst-case Attacks: Robust RL with Adaptive Defense via Non-dominated Policies | [
"Xiangyu Liu",
"Chenghao Deng",
"Yanchao Sun",
"Yongyuan Liang",
"Furong Huang"
] | Workshop/MASEC | 2402.12673 | [
"https://github.com/umd-huang-lab/protected"
] | https://huggingface.co/papers/2402.12673 | 2 | 0 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 | poster |
null | https://openreview.net/forum?id=QWXwhHQHLv | @inproceedings{
milec2023generation,
title={Generation of Games for Opponent Model Differentiation},
author={David Milec and Viliam Lis{\'y} and Christopher Kiekintveld},
booktitle={Multi-Agent Security Workshop @ NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=QWXwhHQHLv}
} | Protecting against adversarial attacks is a common multiagent problem in the real world. Attackers in the real world are predominantly human actors, and the protection methods often incorporate opponent models to improve the performance when facing humans. Previous results show that modeling human behavior can significantly improve the performance of the algorithms. However, modeling humans correctly is a complex problem, and the models are often simplified and assume humans make mistakes according to some distribution or train parameters for the whole population from which they sample. In this work, we use data gathered by psychologists who identified personality types that increase the likelihood of performing malicious acts. However, in the previous work, the tests on a handmade game could not show strategic differences between the models. We created a novel model that links its parameters to psychological traits. We optimized over parametrized games and created games in which the differences are profound. Our work can help with automatic game generation when we need a game in which some models will behave differently and to identify situations in which the models do not align. | Generation of Games for Opponent Model Differentiation | [
"David Milec",
"Viliam Lisý",
"Christopher Kiekintveld"
] | Workshop/MASEC | 2311.16781 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=KOZwk7BFc3 | @inproceedings{
yang2023language,
title={Language Agents as Hackers: Evaluating Cybersecurity Skills with Capture the Flag},
author={John Yang and Akshara Prabhakar and Shunyu Yao and Kexin Pei and Karthik R Narasimhan},
booktitle={Multi-Agent Security Workshop @ NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=KOZwk7BFc3}
} | Amidst the advent of language models (LMs) and their wide-ranging capabilities, concerns have been raised about their implications with regards to privacy and security. In particular, the emergence of language agents as a promising aid for automating and augmenting digital work poses immediate questions concerning their misuse as malicious cybersecurity actors. With their exceptional compute efficiency and execution speed relative to human counterparts, language agents may be extremely adept at locating vulnerabilities, performing complex social engineering, and hacking real world systems. Understanding and guiding the development of language agents in the cybersecurity space requires a grounded understanding of their capabilities founded on empirical data and demonstrations. To address this need, we introduce InterCode-CTF, a novel task environment and benchmark for evaluating language agents on the Capture the Flag (CTF) task. Built as a facsimile of real world CTF competitions, in the InterCode-CTF environment, a language agent is tasked with finding a flag from a purposely-vulnerable computer program. We manually collect and verify a benchmark of 100 task instances that require a number of cybersecurity skills such as reverse engineering, forensics, and binary exploitation, then evaluate current top-notch LMs on this evaluation set. Our preliminary findings indicate that while language agents possess rudimentary cybersecurity knowledge, they are not able to perform multi-step cybersecurity tasks out-of-the-box. | Language Agents as Hackers: Evaluating Cybersecurity Skills with Capture the Flag | [
"John Yang",
"Akshara Prabhakar",
"Shunyu Yao",
"Kexin Pei",
"Karthik R Narasimhan"
] | Workshop/MASEC | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
||
null | https://openreview.net/forum?id=HPmhaOTseN | @inproceedings{
terekhov2023secondorder,
title={Second-order Jailbreaks: Generative Agents Successfully Manipulate Through an Intermediary},
author={Mikhail Terekhov and Romain Graux and Eduardo Neville and Denis Rosset and Gabin Kolly},
booktitle={Multi-Agent Security Workshop @ NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=HPmhaOTseN}
} | As the capabilities of Large Language Models (LLMs) continue to expand, their application in communication tasks is becoming increasingly prevalent. However, this widespread use brings with it novel risks, including the susceptibility of LLMs to "jailbreaking" techniques. In this paper, we explore the potential for such risks in two- and three-agent communication networks, where one agent is tasked with protecting a password while another attempts to uncover it. Our findings reveal that an attacker, powered by advanced LLMs, can extract the password even through an intermediary that is instructed to prevent this. Our contributions include an experimental setup for evaluating the persuasiveness of LLMs, a demonstration of LLMs' ability to manipulate each other into revealing protected information, and a comprehensive analysis of this manipulative behavior. Our results underscore the need for further investigation into the safety and security of LLMs in communication networks. | Second-order Jailbreaks: Generative Agents Successfully Manipulate Through an Intermediary | [
"Mikhail Terekhov",
"Romain Graux",
"Eduardo Neville",
"Denis Rosset",
"Gabin Kolly"
] | Workshop/MASEC | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=FuzJ9abiJb | @inproceedings{
guo2023rave,
title={{RAVE}: Enabling safety verification for realistic deep reinforcement learning systems},
author={Wenbo Guo and Taesung Lee and Kevin Eykholt and Jiyong Jiang},
booktitle={Multi-Agent Security Workshop @ NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=FuzJ9abiJb}
} | Recent advancements in reinforcement learning (RL) expedited its success across a wide range of decision-making problems. However, a lack of safety guarantees restricts its use in critical tasks. While recent work has proposed several verification techniques to provide such guarantees, they require that the state-transition function be known and the reinforcement learning policy be deterministic. Both of these properties may not be true in real environments, which significantly limits the use of existing verification techniques. In this work, we propose two approximation strategies that address the limitation of prior work allowing the safety verification of RL policies. We demonstrate that by augmenting state-of-the-art verification techniques with our proposed approximation strategies, we can guarantee the safety of non-deterministic RL policies operating in environments with unknown state-transition functions. We theoretically prove that our technique guarantees the safety of an RL policy at runtime. Our experiments on three representative RL tasks empirically verify the efficacy of our method in providing a safety guarantee to a target agent while maintaining its task execution performance. | RAVE: Enabling safety verification for realistic deep reinforcement learning systems | [
"Wenbo Guo",
"Taesung Lee",
"Kevin Eykholt",
"Jiyong Jiang"
] | Workshop/MASEC | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=FXZFrOvIoc | @inproceedings{
motwani2023a,
title={A Perfect Collusion Benchmark: How can {AI} agents be prevented from colluding with information-theoretic undetectability?},
author={Sumeet Ramesh Motwani and Mikhail Baranchuk and Lewis Hammond and Christian Schroeder de Witt},
booktitle={Multi-Agent Security Workshop @ NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=FXZFrOvIoc}
} | Secret collusion among advanced AI agents is widely considered a significant risk to AI safety. In this paper, we investigate whether LLM agents can learn to collude undetectably through hiding secret messages in their overt communications. To this end, we implement a variant of Simmons' prisoners' problem using LLM agents and turn it into a stegosystem by leveraging recent advances in perfectly secure steganography. We suggest that our resulting benchmark environment can be used to investigate how easily LLM agents can learn to use perfectly secure steganography tools, and how secret collusion between agents can be countered pre-emptively through paraphrasing attacks on communication channels. Our work yields unprecedented empirical insight into the question of whether advanced AI agents may be able to collude unnoticed. | A Perfect Collusion Benchmark: How can AI agents be prevented from colluding with information-theoretic undetectability? | [
"Sumeet Ramesh Motwani",
"Mikhail Baranchuk",
"Lewis Hammond",
"Christian Schroeder de Witt"
] | Workshop/MASEC | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=CZcIYfiGlL | @inproceedings{
sun2023cooperative,
title={Cooperative {AI} via Decentralized Commitment Devices},
author={Xinyuan Sun and Davide Crapis and Matt Stephenson and Jonathan Passerat-Palmbach},
booktitle={Multi-Agent Security Workshop @ NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=CZcIYfiGlL}
} | Credible commitment devices have been a popular approach for robust multi-agent coordination. However, existing commitment mechanisms face limitations like privacy, integrity, and susceptibility to mediator or user strategic behavior. It is unclear if the cooperative AI techniques we study are robust to real-world incentives and attack vectors. Fortunately, decentralized commitment devices that utilize cryptography have been deployed in the wild, and numerous studies have shown their ability to coordinate algorithmic agents, especially when agents face rational or sometimes adversarial opponents with significant economic incentives, currently in the order of several million to billions of dollars. In this paper, we illustrate potential security issues in cooperative AI via examples in the decentralization literature and, in particular, Maximal Extractable Value (MEV). We call for expanded research into decentralized commitments to advance cooperative AI capabilities for secure coordination in open environments and empirical testing frameworks to evaluate multi-agent coordination ability given real-world commitment constraints. | Cooperative AI via Decentralized Commitment Devices | [
"Xinyuan Sun",
"Davide Crapis",
"Matt Stephenson",
"Jonathan Passerat-Palmbach"
] | Workshop/MASEC | 2311.07815 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=5U8PGlJt2S | @inproceedings{
sun2023robust,
title={Robust Q-Learning against State Perturbations: a Belief-Enriched Pessimistic Approach},
author={Xiaolin Sun and Zizhan Zheng},
booktitle={Multi-Agent Security Workshop @ NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=5U8PGlJt2S}
} | Reinforcement learning (RL) has achieved phenomenal success in various domains. However, its data-driven nature also introduces new vulnerabilities that can be exploited by malicious opponents. Recent work shows that a well-trained RL agent can be easily manipulated by strategically perturbing its state observations at the test stage. Existing solutions either introduce a regularization term to improve the smoothness of the trained policy against perturbations or alternatively train the agent's policy and the attacker's policy. However, the former does not provide sufficient protection against strong attacks, while the latter is computationally prohibitive for large environments. In this work, we propose a new robust RL algorithm for deriving a pessimistic policy to safeguard against an agent's uncertainty about true states. This approach is further enhanced with belief state inference and diffusion-based state purification to reduce uncertainty. Empirical results show that our approach obtains superb performance under strong attacks and has a comparable training overhead with regularization-based methods. | Robust Q-Learning against State Perturbations: a Belief-Enriched Pessimistic Approach | [
"Xiaolin Sun",
"Zizhan Zheng"
] | Workshop/MASEC | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=5HuBX8LvuT | @inproceedings{
mukobi2023assessing,
title={Assessing Risks of Using Autonomous Language Models in Military and Diplomatic Planning},
author={Gabriel Mukobi and Ann-Katrin Reuel and Juan-Pablo Rivera and Chandler Smith},
booktitle={Multi-Agent Security Workshop @ NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=5HuBX8LvuT}
} | The potential integration of autonomous agents in high-stakes military and foreign-policy decision-making has gained prominence, especially with the emergence of advanced generative AI models like GPT-4. This paper aims to scrutinize the behavior of multiple autonomous agents in simulated military and diplomacy scenarios, specifically focusing on their potential to escalate conflicts. Drawing on established international relations frameworks, we assessed the escalation potential of decisions made by these agents in different scenarios. Contrary to prior qualitative studies, our research provides both qualitative and quantitative insights. We find that there are significant differences in the models' predilections to escalate, with Claude 2 being the least aggressive and GPT-4-Base the most aggressive model. Our findings indicate that, even in seemingly neutral contexts, language-model-based autonomous agents occasionally opt for aggressive or provocative actions. This tendency intensifies in scenarios with predefined trigger events. Importantly, the patterns behind such escalatory behavior remain largely unpredictable. Furthermore, a qualitative analysis of the models' verbalized reasoning, particularly in the GPT-4-Base model, reveals concerning justifications. Given the high stakes involved in military and foreign-policy contexts, the deployment of such autonomous agents demands further examination and cautious consideration. | Assessing Risks of Using Autonomous Language Models in Military and Diplomatic Planning | [
"Gabriel Mukobi",
"Ann-Katrin Reuel",
"Juan-Pablo Rivera",
"Chandler Smith"
] | Workshop/MASEC | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=4RFv40DWkp | @inproceedings{
harris2023stackelberg,
title={Stackelberg Games with Side Information},
author={Keegan Harris and Steven Wu and Maria Florina Balcan},
booktitle={Multi-Agent Security Workshop @ NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=4RFv40DWkp}
} | We study an online learning setting in which a leader interacts with a sequence of followers over the course of $T$ rounds. At each round, the leader commits to a mixed strategy over actions, after which the follower best-responds. Such settings are referred to in the literature as Stackelberg games. Stackelberg games have received much interest from the community, in part due to their applicability to real-world security settings such as wildlife preservation and airport security. However, despite this recent interest, current models of Stackelberg games fail to take into consideration the fact that the players' optimal strategies often depend on external factors such as weather patterns, airport traffic, etc. We address this gap by allowing for player payoffs to depend on an external context, in addition to the actions taken by each player. We formalize this setting as a repeated Stackelberg game with side information and show that under this setting, it is impossible to achieve sublinear regret if both the sequence of contexts and the sequence of followers are chosen adversarially. Motivated by this impossibility result, we consider two natural relaxations: (1) stochastically chosen contexts with adversarially chosen followers and (2) stochastically chosen followers with adversarially chosen contexts. In each of these settings, we provide algorithms which obtain $\tilde{\mathcal{O}}(\sqrt{T})$ regret. | Stackelberg Games with Side Information | [
"Keegan Harris",
"Steven Wu",
"Maria Florina Balcan"
] | Workshop/MASEC | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=4831vtb6Bp | @inproceedings{
ganzfried2023safe,
title={Safe Equilibrium},
author={Sam Ganzfried},
booktitle={Multi-Agent Security Workshop @ NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=4831vtb6Bp}
} | The standard game-theoretic solution concept, Nash equilibrium, assumes that all players behave rationally. If we follow a Nash equilibrium and opponents are irrational (or follow strategies from a different Nash equilibrium), then we may obtain an extremely low payoff. On the other hand, a maximin strategy assumes that all opposing agents are playing to minimize our payoff (even if it is not in their best interest), and ensures the maximal possible worst-case payoff, but results in exceedingly conservative play. We propose a new solution concept called safe equilibrium that models opponents as behaving rationally with a specified probability and behaving potentially arbitrarily with the remaining probability. We prove that a safe equilibrium exists in all strategic-form games (for all possible values of the rationality parameters), and prove that its computation is PPAD-hard. | Safe Equilibrium | [
"Sam Ganzfried"
] | Workshop/MASEC | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=3b8hfpqtlM | @inproceedings{
souly2023leading,
title={Leading the Pack: N-player Opponent Shaping},
author={Alexandra Souly and Timon Willi and Akbir Khan and Robert Kirk and Chris Lu and Edward Grefenstette and Tim Rockt{\"a}schel},
booktitle={Multi-Agent Security Workshop @ NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=3b8hfpqtlM}
} | Reinforcement learning solutions have had great success in the 2-player general-sum setting. In this setting, the paradigm of Opponent Shaping (OS), in which agents account for the learning of their co-players, has led to agents which are able to avoid collectively bad outcomes, whilst also maximizing their reward. These methods have to date been limited to 2-player games. However, the real world involves interactions with many more agents, with interactions on both local and global scales. In this paper, we extend Opponent Shaping (OS) methods to environments involving multiple co-players and multiple shaping agents. We evaluate on 4 different environments, varying the number of players from 3 to 5, and demonstrate that model-based OS methods converge to equilibrium with better global welfare than naive learning. However, we find that when playing with a large number of co-players, OS methods' relative performance declines, suggesting that in the limit OS methods may not perform well. Finally, we explore scenarios where more than one OS method is present, noticing that within games requiring a majority of cooperating agents, OS methods converge to outcomes with poor global welfare. | Leading the Pack: N-player Opponent Shaping | [
"Alexandra Souly",
"Timon Willi",
"Akbir Khan",
"Robert Kirk",
"Chris Lu",
"Edward Grefenstette",
"Tim Rocktäschel"
] | Workshop/MASEC | 2312.12564 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |
|
null | https://openreview.net/forum?id=1Zb8JjrgSK | @inproceedings{
shi2023harnessing,
title={Harnessing the Power of Federated Learning in Federated Contextual Bandits},
author={Chengshuai Shi and Kun Yang and Ruida Zhou and Cong Shen},
booktitle={Multi-Agent Security Workshop @ NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=1Zb8JjrgSK}
} | Federated contextual bandits (FCB), as a pivotal instance of combining federated learning (FL) and sequential decision-making, have received growing interest in recent years. However, existing FCB designs often adopt FL protocols tailored for specific settings, deviating from the canonical FL framework (e.g., the celebrated FedAvg design). Such disconnections not only prohibit these designs from flexibly leveraging canonical FL algorithmic approaches but also set considerable barriers for FCB to incorporate growing studies on FL attributes such as robustness and privacy. To promote a closer relationship between FL and FCB, we propose a novel FCB design, FedIGW, which can flexibly incorporate both existing and future FL protocols and thus is capable of harnessing the full spectrum of FL advances. | Harnessing the Power of Federated Learning in Federated Contextual Bandits | [
"Chengshuai Shi",
"Kun Yang",
"Ruida Zhou",
"Cong Shen"
] | Workshop/MASEC | 2312.16341 | [
"https://github.com/shengroup/fedigw"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
|
null | https://openreview.net/forum?id=15wSm5uiSE | @inproceedings{
chopra2023decentralized,
title={Decentralized agent-based modeling},
author={Ayush Chopra and Arnau Quera-Bofarull and Nurullah Giray Kuru and Ramesh Raskar},
booktitle={Multi-Agent Security Workshop @ NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=15wSm5uiSE}
} | The utility of agent-based models for practical decision making depends upon their ability to recreate populations with great detail and integrate real-world data streams. However, incorporating this data can be challenging due to privacy concerns. We alleviate this issue by introducing a paradigm for secure agent-based modeling. In particular, we leverage secure multi-party computation to enable decentralized agent-based simulation, calibration, and analysis. We believe this is a critical step towards making agent-based models scalable to real-world applications. | Decentralized agent-based modeling | [
"Ayush Chopra",
"Arnau Quera-Bofarull",
"Nurullah Giray Kuru",
"Ramesh Raskar"
] | Workshop/MASEC | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | poster |
||
null | https://openreview.net/forum?id=0O5vbRAWol | @inproceedings{
ankile2023i,
title={I See You! Robust Measurement of Adversarial Behavior},
author={Lars Ankile and Matheus X.V. Ferreira and David Parkes},
booktitle={Multi-Agent Security Workshop @ NeurIPS'23},
year={2023},
url={https://openreview.net/forum?id=0O5vbRAWol}
} | We introduce the study of non-manipulable measures of manipulative behavior in multi-agent systems. We do this through a case study of decentralized finance (DeFi) and blockchain systems, which are salient as real-world, rapidly emerging multi-agent systems with financial incentives for malicious behavior, for the participation in algorithmic and AI systems, and for the need for new methods with which to measure levels of manipulative behavior. We introduce a new surveillance metric for measuring malicious behavior and demonstrate its effectiveness in a natural experiment on the Uniswap DeFi ecosystem. | I See You! Robust Measurement of Adversarial Behavior | [
"Lars Ankile",
"Matheus X.V. Ferreira",
"David Parkes"
] | Workshop/MASEC | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | oral |