Red Teaming GPT-4V: Are GPT-4V Safe Against Uni/Multi-Modal Jailbreak Attacks? Paper • arXiv:2404.03411 • Published Apr 4, 2024
Teams of LLM Agents can Exploit Zero-Day Vulnerabilities Paper • arXiv:2406.01637 • Published Jun 2024