As part of our research efforts to make LLMs safer, we created **Starling**. It is obtained by fine-tuning Vicuna-7B on [**HarmfulQA**](https://huggingface.co/datasets/declare-lab/HarmfulQA), a ChatGPT-distilled dataset that we collected using the Chain of Utterances (CoU) prompt. More details are in our paper, [**Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment**](https://arxiv.org/abs/2308.09662).

<img src="https://declare-lab.github.io/assets/images/logos/starling-final.png" alt="Image" width="100" height="100">
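
For reference, below is a minimal inference sketch with 🤗 Transformers. The repository id `declare-lab/starling-7B`, the Vicuna-style prompt template, and the generation settings are our assumptions here, not specifications from this card; adjust them as needed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "declare-lab/starling-7B"  # assumed repo id for this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Vicuna-style prompt template (an assumption, since Starling fine-tunes Vicuna-7B)
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions. USER: What are safe ways to dispose of old batteries? ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
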
Experimental results on several safety benchmark datasets indicate that **Starling** is a safer model compared to the baseline model, Vicuna.

<img src="https://declare-lab.github.io/assets/images/logos/method.png" alt="Image" width="1000" height="335">

<h2>Experimental Results</h2>

Compared to Vicuna, **Avg. 5.2% reduction in Attack Success Rate** (ASR) on DangerousQA and HarmfulQA using red-teaming prompts.

Compared to Vicuna, **Avg. 3-7% improvement in HHH score** measured on the BBH-HHH benchmark.

<img src="https://declare-lab.github.io/assets/images/logos/starling-results.png" alt="Image" width="1000" height="335">

TruthfulQA (MC2): **48.90 vs Vicuna's 47.00**

BBH (3-shot): **33.47 vs Vicuna's 33.05**

This jailbreak prompt (termed the Chain of Utterances (CoU) prompt in the paper) shows a 65% Attack Success Rate (ASR) on GPT-4 and 72% on ChatGPT.

<img src="https://declare-lab.github.io/assets/images/logos/jailbreakprompt_main_paper.png" alt="Image" width="1000" height="1000">

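As a rough aid to reading these numbers: ASR is simply the fraction of red-teaming prompts for which the model's response is judged harmful. A tiny illustrative helper is below; the boolean `judgements` encoding is our simplification, not the paper's evaluation code.

```python
def attack_success_rate(judgements: list[bool]) -> float:
    """Fraction of red-teaming prompts judged to elicit a harmful response.

    `judgements` holds one boolean per attacked prompt: True if the
    response was labeled harmful by the evaluator.
    """
    if not judgements:
        return 0.0
    return sum(judgements) / len(judgements)

# e.g. 72 harmful completions out of 100 prompts -> 0.72, i.e. 72% ASR
print(attack_success_rate([True] * 72 + [False] * 28))
```
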
<h2>HarmfulQA Data Collection</h2>

We also release our **HarmfulQA** dataset with 1,960 harmful questions (covering 10 topics, each with 10 subtopics) for red-teaming, as well as conversations based on them that were used for model safety alignment; more details [**here**](https://huggingface.co/datasets/declare-lab/HarmfulQA). The following figure describes the data collection process.

<img src="https://declare-lab.github.io/assets/images/logos/data_gen.png" alt="Image" width="1000" height="1000">

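The dataset can be pulled directly from the Hub with 🤗 Datasets; a minimal sketch is below (check the dataset card for the actual splits and field names, which we do not assume here).

```python
from datasets import load_dataset

# HarmfulQA: 1,960 harmful questions plus CoU-collected conversations
ds = load_dataset("declare-lab/HarmfulQA")
print(ds)           # available splits and features
print(ds["train"][0])  # first record; see the dataset card for the schema
```
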
_Note: This model is referred to as Starling (Blue) in the paper. We shall soon release Starling (Blue-Red), which was trained on harmful data using an objective function that helps the model learn from the red (harmful) response data._