WildGuard: Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs • Paper • arXiv:2406.18495
WildTeaming at Scale: From In-the-Wild Jailbreaks to (Adversarially) Safer Language Models • Paper • arXiv:2406.18510
AI2 Safety Toolkit • Collection • Safety data, moderation tools, and safe LLMs • 6 items