|
<!DOCTYPE html> |
|
<html lang="en-US"> |
|
<head> |
|
<meta charset="UTF-8"> |
|
|
|
|
|
<title>Gradient Cuff: Detecting Jailbreak Attacks on Large Language Models by Exploring Refusal Loss Landscapes</title>
|
<meta property="og:title" content="Gradient Cuff" /> |
|
<meta property="og:locale" content="en_US" /> |
|
<meta name="description" content="Gradient Cuff: Detecting Jailbreak Attacks on Large Language Models by Exploring Refusal Loss Landscapes" /> |
|
<meta property="og:description" content="Gradient Cuff: Detecting Jailbreak Attacks on Large Language Models by Exploring Refusal Loss Landscapes" /> |
|
<script type="application/ld+json"> |
|
{"@context":"https://schema.org","@type":"WebSite","description":"Gradient Cuff: Detecting Jailbreak Attacks on Large Language Models by Exploring Refusal Loss Landscapes","headline":"Gradient Cuff","name":"Gradient Cuff","url":"https://huggingface.co/spaces/gregH/Gradient Cuff"}</script> |
|
|
|
|
|
<link rel="preconnect" href="https://fonts.gstatic.com"> |
|
<link rel="preload" href="https://fonts.googleapis.com/css?family=Open+Sans:400,700&display=swap" as="style" type="text/css" crossorigin> |
|
<meta name="viewport" content="width=device-width, initial-scale=1"> |
|
<meta name="theme-color" content="#157878"> |
|
<meta name="apple-mobile-web-app-status-bar-style" content="black-translucent"> |
|
|
|
<link rel="stylesheet" href="assets/css/bootstrap/bootstrap.min.css?v=90447f115a006bc45b738d9592069468b20e2551"> |
|
<link rel="stylesheet" href="assets/css/style.css?v=90447f115a006bc45b738d9592069468b20e2551"> |
|
|
|
<link rel="stylesheet" href="assets/css/custom_style.css?v=90447f115a006bc45b738d9592069468b20e2551"> |
|
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script> |
|
<link rel="stylesheet" href="https://ajax.googleapis.com/ajax/libs/jqueryui/1.12.1/themes/smoothness/jquery-ui.css"> |
|
<script src="https://ajax.googleapis.com/ajax/libs/jqueryui/1.12.1/jquery-ui.min.js"></script> |
|
<script src="https://cdnjs.cloudflare.com/ajax/libs/Chart.js/2.9.4/Chart.js"></script> |
|
<script src="assets/js/calibration.js?v=90447f115a006bc45b738d9592069468b20e2551"></script> |
|
|
|
|
|
|
|
|
|
|
|
|
|
<script src="https://polyfill.io/v3/polyfill.min.js?features=es6"></script> |
|
<script id="MathJax-script" async src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script> |
|
|
|
|
|
|
|
|
|
</head> |
|
<body> |
|
<a id="skip-to-content" href="#content">Skip to the content.</a> |
|
|
|
<header class="page-header" role="banner"> |
|
<h1 class="project-name">Gradient Cuff</h1> |
|
<h2 class="project-tagline">Gradient Cuff: Detecting Jailbreak Attacks on Large Language Models by Exploring Refusal Loss Landscapes</h2> |
|
|
|
|
|
</header> |
|
|
|
<main id="content" class="main-content" role="main"> |
|
<h2 id="introduction">Introduction</h2> |
|
|
|
<p>Neural network calibration is an essential task in deep learning: it ensures consistency

between the confidence of a model's predictions and their true correctness likelihood. In this

demonstration, we first visualize the idea of neural network calibration on a binary

classifier and show the model outputs that characterize its calibration. Second, we introduce

our proposed framework <strong>Neural Clamping</strong>, which employs a simple joint input-output

transformation on a pre-trained classifier. We also provide other calibration approaches

(e.g., temperature scaling) for comparison with Neural Clamping.</p>
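
<p>To make the comparison with temperature scaling concrete, the following is a minimal
JavaScript sketch of the idea: logits are divided by a scalar temperature fitted on a
validation set before the softmax. The logits and temperature values here are illustrative
only and are not taken from the demo.</p>

<div class="language-javascript highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// Temperature scaling: rescale logits by a scalar T, then renormalize.
function softmaxWithTemperature(logits, T) {
  const scaled = logits.map(z => z / T);
  const maxZ = Math.max(...scaled);                  // subtract max for numerical stability
  const exps = scaled.map(z => Math.exp(z - maxZ));
  const total = exps.reduce((sum, e) => sum + e, 0);
  return exps.map(e => e / total);
}

const logits = [2.3, 0.4, -1.1];                     // toy pre-softmax outputs
console.log(softmaxWithTemperature(logits, 1.0));    // original (often overconfident) probabilities
console.log(softmaxWithTemperature(logits, 1.5));    // T greater than 1 softens the confidence
</code></pre></div></div>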
|
|
|
<h2 id="what-is-jailbreak">What is Calibration?</h2> |
|
<p>Neural network calibration seeks to align a model's predictive confidence with its true correctness likelihood.

A well-calibrated model should provide accurate predictions and reliable confidence when making inferences. In

contrast, a poorly calibrated model exhibits a wide gap between its accuracy and its average confidence.

This mismatch can be harmful in scenarios that require accurate uncertainty estimation, such as safety-critical

applications (e.g., autonomous driving and medical diagnosis).</p>
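
<p>The following toy JavaScript example (with made-up predictions, not outputs of the demo models)
shows the gap in question: the classifier below is right only half the time, yet its average
confidence is above 97%.</p>

<div class="language-javascript highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// A poorly calibrated model: accuracy and average confidence diverge.
const preds = [
  { conf: 0.99, correct: true  },
  { conf: 0.97, correct: false },
  { conf: 0.95, correct: true  },
  { conf: 0.98, correct: false },
];
const accuracy = preds.filter(p => p.correct).length / preds.length;       // 0.50
const avgConf  = preds.reduce((sum, p) => sum + p.conf, 0) / preds.length; // 0.9725
console.log("confidence-accuracy gap:", avgConf - accuracy);               // roughly 0.47
</code></pre></div></div>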
|
|
|
<div class="container"> |
|
<div id="jailbreak-intro" class="row align-items-center jailbreak-intro-sec"> |
|
<img id="jailbreak-intro-img" src="https://hsiung.cc/NCTV/images/conf_acc_demo.gif" /> |
|
</div> |
|
</div> |
|
|
|
<h3 id="refusal-loss">Calibration Metrics</h3> |
|
<p>To quantify calibration objectively, researchers use <strong>calibration metrics</strong> that measure a model's

calibration error, for example, the Expected Calibration Error (ECE), Static Calibration Error (SCE), and Adaptive Calibration Error (ACE).</p>
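
<p>As a reference, here is a minimal JavaScript sketch of ECE under the usual equal-width binning:
predictions are grouped into confidence bins, and the per-bin gaps between accuracy and mean
confidence are averaged, weighted by bin size. The <code>samples</code> format (records of
<code>conf</code> and <code>correct</code>) is an assumption for illustration; SCE and ACE refine
the same binning idea.</p>

<div class="language-javascript highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// Expected Calibration Error with equal-width confidence bins.
function expectedCalibrationError(samples, numBins) {
  const bins = Array.from({ length: numBins }, () => ({ n: 0, confSum: 0, correct: 0 }));
  for (const s of samples) {
    // Map confidence in [0, 1] to a bin index, clamping conf = 1.0 into the last bin.
    const b = bins[Math.min(numBins - 1, Math.floor(s.conf * numBins))];
    b.n += 1;
    b.confSum += s.conf;
    b.correct += s.correct ? 1 : 0;
  }
  let ece = 0;
  for (const b of bins) {
    if (b.n === 0) continue;                     // skip empty bins
    const gap = Math.abs(b.correct / b.n - b.confSum / b.n);
    ece += (b.n / samples.length) * gap;         // weight each bin by its share of samples
  }
  return ece;
}
</code></pre></div></div>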
|
|
|
<div class="container jailbreak-intro-sec"> |
|
<div><img id="jailbreak-intro-img" src="images/metrics/intro-metric-example.png" /></div> |
|
</div> |
|
|
|
<div id="refusal-loss-formula" class="container"> |
|
<div id="refusal-loss-formula-list" class="row align-items-center formula-list"> |
|
<a href="#ECE-formula" class="selected">Refusal Loss</a> |
|
<a href="#SCE-formula">Refusal Loss Approximation</a> |
|
<a href="#ACE-formula">Gradient Estimation</a> |
|
<div style="clear: both"></div> |
|
</div> |
|
<div id="refusal-loss-formula-content" class="row align-items-center"> |
|
<span id="ECE-formula" class="formula" style="">$$\displaystyle \phi_\theta(x)=1-\mathbb{E}_{y \sim T_\theta(x)} JB(y)$$</span> |
|
<span id="SCE-formula" class="formula" style="display: none;">$$\displaystyle f_\theta(x)=1-\frac{1}{N}\sum_{i=1}^N JB(y_i)$$</span> |
|
<span id="ACE-formula" class="formula" style="display: none;">$$\displaystyle g_\theta(x)=\sum_{i=1}^P \frac{f_\theta(x\oplus \mu u_i)-f_\theta(x)}{\mu} u_i $$</span> |
|
</div> |
|
</div> |
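
<p>The three formulas in the tabs above compose as follows: the refusal loss is approximated by
sampling N responses and counting how many are jailbroken, and its gradient is then estimated
with zeroth-order finite differences over P random perturbation directions. The sketch below
outlines this composition in JavaScript; <code>sampleResponses</code>, <code>isJailbroken</code>,
<code>perturb</code>, and <code>randomUnitDirection</code> are hypothetical placeholders for the
model's sampling interface and the JB indicator, not functions in this demo's code.</p>

<div class="language-javascript highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// f(x) = 1 - (1/N) * sum_i JB(y_i), with responses y_i sampled from the model.
function refusalLoss(x, N) {
  const responses = sampleResponses(x, N);                  // hypothetical sampling call
  const jailbroken = responses.filter(y => isJailbroken(y)).length;
  return 1 - jailbroken / N;
}

// g(x) = sum_i [f(x ⊕ mu*u_i) - f(x)] / mu * u_i over P random directions u_i.
// Only the norm of g(x) is needed for thresholding, so we accumulate it directly,
// treating the sampled directions as approximately orthonormal.
function gradientNormEstimate(x, P, mu, N) {
  const fx = refusalLoss(x, N);
  let sumSq = 0;
  for (let i = 0; i !== P; i += 1) {
    const u = randomUnitDirection();                        // hypothetical helper
    const coeff = (refusalLoss(perturb(x, mu, u), N) - fx) / mu;
    sumSq += coeff * coeff;
  }
  return Math.sqrt(sumSq);
}
</code></pre></div></div>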
|
|
|
<h2 id="proposed-approach-gradient-cuff">Proposed Approach: Gradient Cuff</h2> |
|
|
|
<div class="container"><img id="gradient-cuff-header" src="images/header.png" /></div> |
|
|
|
<h2 id="demonstration">Demonstration</h2> |
|
<p>In current research, reliability diagrams are drawn to show a model's calibration performance. However,

because reliability diagrams are typically rendered as static bar charts, the insight that can be drawn

from them is limited. In this demonstration, we show how to make reliability diagrams interactive and

insightful, helping researchers and developers gain more from the graph. Specifically, we provide three

CIFAR-100 classification models in this demonstration, and multiple bin numbers are also supported.</p>
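
<p>For reference, the sketch below shows one way such an interactive diagram could be wired up with
the Chart.js library already loaded on this page. The canvas id and the per-bin values are
illustrative placeholders, not the demo's actual data; the demo itself swaps pre-rendered images.</p>

<div class="language-javascript highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// Render per-bin accuracy vs. confidence as a grouped bar chart (Chart.js 2.x API).
const ctx = document.getElementById("reliability-canvas"); // hypothetical canvas element
new Chart(ctx, {
  type: "bar",
  data: {
    labels: ["0-0.2", "0.2-0.4", "0.4-0.6", "0.6-0.8", "0.8-1.0"], // 5 equal-width bins
    datasets: [
      { label: "Accuracy",   data: [0.15, 0.33, 0.52, 0.68, 0.83], backgroundColor: "#157878" },
      { label: "Confidence", data: [0.12, 0.31, 0.51, 0.71, 0.91], backgroundColor: "#c0c0c0" },
    ],
  },
  options: { scales: { yAxes: [{ ticks: { min: 0, max: 1 } }] } }, // Chart.js 2.x scale syntax
});
</code></pre></div></div>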
|
|
|
<p>We hope this tool can also facilitate the model development process.</p>
|
|
|
<div id="jailbreak-demo" class="container"> |
|
<div class="row align-items-center"> |
|
<div class="row" style="margin: 10px 0 0"> |
|
<div class="models-list"> |
|
<span style="margin-right: 1em;">Models</span> |
|
<span class="radio-group"><input type="radio" id="LLaMA2" class="options" name="models" value="llama2_7b_chat" checked="" /><label for="LLaMA2" class="option-label">LLaMA-2-7B-Chat</label></span> |
|
<span class="radio-group"><input type="radio" id="Vicuna" class="options" name="models" value="vicuna_7b_v1.5" /><label for="Vicuna" class="option-label">Vicuna-7B-V1.5</label></span> |
|
</div> |
|
</div> |
|
</div> |
|
<div class="row align-items-center"> |
|
<div class="col-4"> |
|
<div id="defense-methods"> |
|
<div class="row align-items-center"><input type="radio" id="defense_ppl" class="options" name="defense" value="ppl" /><label for="defense_ppl" class="defense">Perplexity Filter</label></div> |
|
<div class="row align-items-center"><input type="radio" id="defense_smoothllm" class="options" name="defense" value="smoothllm" /><label for="defense_smoothllm" class="defense">SmoothLLM</label></div> |
|
<div class="row align-items-center"><input type="radio" id="defense_erase_check" class="options" name="defense" value="erase_check" /><label for="defense_erase_check" class="defense">Erase-Check</label></div> |
|
<div class="row align-items-center"><input type="radio" id="defense_self_reminder" class="options" name="defense" value="self_reminder" /><label for="defense_self_reminder" class="defense">Self-Reminder</label></div> |
|
<div class="row align-items-center"><input type="radio" id="defense_gradient_cuff" class="options" name="defense" value="gradient_cuff" /><label for="defense_gradient_cuff" class="defense"><span style="font-weight: bold;">Gradient Cuff</span></label></div> |
|
</div> |
|
<div class="row align-items-center"> |
|
<div class="legend"><img src="images/demo-legend.png" alt="legend" /></div> |
|
</div> |
|
<div class="row align-items-center"> |
|
<div class="attack-success-rate"><span class="jailbreak-metric">Average Malicious Refusal Rate</span><span class="attack-success-rate-value" id="asr-value">0.10731</span></div> |
|
</div> |
|
<div class="row align-items-center"> |
|
<div class="benign-refusal-rate"><span class="jailbreak-metric">Benign Refusal Rate</span><span class="benign-refusal-rate-value" id="brr-value">0.10721</span></div> |
|
</div> |
|
</div> |
|
<div class="col-8"> |
|
<figure class="figure"> |
|
<img id="reliability-diagram" src="images/cifar100/resnet110/none/bin15.png" alt="CIFAR-100 Calibrated Reliability Diagram (Full)" /> |
|
<div class="slider-container"> |
|
<div class="slider-label"><span>Perplexity Threshold</span></div> |
|
<div class="slider-content" id="ppl-slider"><div id="ppl-threshold" class="ui-slider-handle"></div></div> |
|
</div> |
|
<div class="slider-container"> |
|
<div class="slider-label"><span>Gradient Threshold</span></div> |
|
<div class="slider-content" id="gradient-norm-slider"><div id="gradient-norm-threshold" class="slider-value ui-slider-handle"></div></div> |
|
</div> |
|
<figcaption class="figure-caption"> |
|
</figcaption> |
|
</figure> |
|
</div> |
|
</div> |
|
</div> |
|
|
|
<h2 id="citations">Citations</h2> |
|
<p>If you find Neural Clamping helpful for your research, please cite our papers as follows:</p>
|
|
|
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@inproceedings{hsiung2023nctv, |
|
title={{NCTV: Gradient Cuff: Detecting Jailbreak Attacks on Large Language Models by Exploring Refusal Loss Landscapes}}, |
|
author={Lei Hsiung, Yung-Chen Tang and Pin-Yu Chen and Tsung-Yi Ho}, |
|
booktitle={Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence}, |
|
publisher={Association for the Advancement of Artificial Intelligence}, |
|
year={2023}, |
|
month={February} |
|
} |
|
|
|
@misc{tang2022neural_clamping,

  title={{Neural Clamping: Joint Input Perturbation and Temperature Scaling for Neural Network Calibration}},

  author={Yung-Chen Tang and Pin-Yu Chen and Tsung-Yi Ho},

  year={2022},

  eprint={2209.11604},

  archivePrefix={arXiv},

  primaryClass={cs.LG}

}
|
</code></pre></div></div> |
|
|
|
|
|
<footer class="site-footer"> |
|
|
|
<span class="site-footer-owner">NCTV is maintained by <a href="https://hsiung.cc">Lei Hsiung</a> and <a href="https://github.com/yungchentang">Yung-Chen Tang</a>.</span> |
|
|
|
</footer> |
|
</main> |
|
</body> |
|
</html> |
|
|