Commit 7e303b0 · Parent(s): 71623b0
Update README.md
README.md CHANGED
```diff
@@ -7,7 +7,7 @@ license: apache-2.0

 [**Dataset**](https://huggingface.co/datasets/declare-lab/HarmfulQA)

-We created **Starling** by fine-tuning Vicuna-7B on HarmfulQA, a ChatGPT-distilled dataset. More details are in our paper [**Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment**](https://openreview.net/pdf?id=jkcHYEfPv3)
+We created **Starling** by fine-tuning Vicuna-7B on HarmfulQA, a ChatGPT-distilled dataset that we collected using the Chain of Utterances (CoU) prompt. More details are in our paper [**Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment**](https://openreview.net/pdf?id=jkcHYEfPv3)

 Experimental results on several safety benchmark datasets indicate that **Starling** is a safer model compared to the baseline model, Vicuna.
```