Papers
arxiv:2406.15279

Cross-Modality Safety Alignment

Published on Jun 21
· Submitted by sinwang on Jun 26
Abstract

As Artificial General Intelligence (AGI) becomes increasingly integrated into various facets of human life, ensuring the safety and ethical alignment of such systems is paramount. Previous studies primarily focus on single-modality threats, which may not suffice given the integrated and complex nature of cross-modality interactions. We introduce a novel safety alignment challenge called Safe Inputs but Unsafe Output (SIUO) to evaluate cross-modality safety alignment. Specifically, it considers cases where single modalities are safe independently but could lead to unsafe or unethical outputs when combined. To empirically investigate this problem, we developed SIUO, a cross-modality benchmark encompassing 9 critical safety domains, such as self-harm, illegal activities, and privacy violations. Our findings reveal substantial safety vulnerabilities in both closed- and open-source LVLMs, such as GPT-4V and LLaVA, underscoring the inability of current models to reliably interpret and respond safely to complex, real-world scenarios.

Community

Paper author and submitter (edited Jun 26):

As AGI integrates deeper into our lives, ensuring its safety is critical. Previous studies primarily focus on single-modality threats, which may not suffice given the integrated and complex nature of cross-modality interactions. We introduce a novel safety alignment challenge called Safe Inputs but Unsafe Output (SIUO) to evaluate cross-modality safety alignment. Specifically, it considers cases where single modalities are safe independently but could lead to unsafe or unethical outputs when combined. We found that LVLMs struggle to identify SIUO-type safety issues and have difficulty providing safe responses: 13 out of 15 models score below a 50% safe response rate, and even an advanced LVLM like GPT-4V achieves a safe response rate of only 53.26% on our SIUO dataset.
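The headline numbers above are safe response rates, i.e. the percentage of a model's responses judged safe by the benchmark's evaluation. A minimal sketch of that computation, assuming each response has already been labeled safe or unsafe (the function name and label format are illustrative, not from the paper):

```python
def safe_response_rate(labels):
    """Percentage of responses judged safe.

    labels: iterable of booleans, True if the response was judged safe.
    """
    labels = list(labels)
    if not labels:
        raise ValueError("no labels provided")
    return 100.0 * sum(labels) / len(labels)

# Hypothetical example: 3 safe responses out of 4 test cases -> 75.0
print(safe_response_rate([True, True, False, True]))
```

A model scoring "below 50%" under this metric produces unsafe responses for the majority of the benchmark's safe-input cases.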

Project Page: https://sinwang20.github.io/SIUO/
Code Repo: https://github.com/sinwang20/SIUO

