Commit 71623b0 · Parent: bb0f400
Update README.md

README.md CHANGED
@@ -7,8 +7,6 @@ license: apache-2.0
 
 [**Dataset**](https://huggingface.co/datasets/declare-lab/HarmfulQA)
 
-[**Model**](https://huggingface.co/declare-lab/starling-7B)
-
 We created **Starling** by fine-tuning Vicuna-7B on HarmfulQA, a ChatGPT-distilled dataset. More details are on our paper [**Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment**](https://openreview.net/pdf?id=jkcHYEfPv3)
 
 Experimental results on several safety benchmark datasets indicate that **Starling** is a safer model compared to the baseline model, Vicuna.