Update README.md

README.md

---
license: apache-2.0
datasets:
- hkust-nlp/deita-6k-v0
language:
- en
---

<img src="https://huggingface.co/datasets/hkust-nlp/deita-images/resolve/main/logo-final.png" alt="Deita banner" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

# Model Card for Deita 7B V1.0 SFT (6k)

Deita is an open-sourced project designed to facilitate **Automatic Data Selection** for instruction tuning in Large Language Models (LLMs).

Deita 7B V1.0 SFT (6k) is a fine-tuned version of Mistral-7B-v0.1 that was trained on 6k automatically selected, lightweight, high-quality alignment SFT data: [Deita 6K V0](https://huggingface.co/datasets/hkust-nlp/deita-6k-v0).

## Model description

- **Model type:** A model fine-tuned on automatically selected, lightweight, high-quality alignment SFT data.
- **Language(s) (NLP):** Primarily English
- **Finetuned from model:** Mistral-7B-v0.1

### Model Sources

- **Repository:** https://github.com/hkust-nlp/deita
- **Model Family:** Other models and the dataset can be found in the [Deita collection](https://huggingface.co/collections/hkust-nlp/deita-6569c198c174808d94cf5bd4).

## Performance

| Model                                       | Align        | Data Size              | MT-Bench | AlpacaEval(%) | OpenLLM (Avg.) |
|---------------------------------------------|--------------|------------------------|----------|---------------|----------------|
| **Proprietary Models**                      |              |                        |          |               |                |
| GPT-4-Turbo                                 | ?            | --                     | 9.32     | 97.70         | --             |
| GPT-4                                       | SFT + PPO    | --                     | 8.99     | 95.03         | --             |
| Claude-2                                    | SFT + PPO    | --                     | 8.06     | 91.36         | --             |
| GPT-3.5-turbo                               | SFT + PPO    | --                     | 7.94     | 89.37         | --             |
| **Open-sourced Models based on Mistral-7B** |              |                        |          |               |                |
| Mistral-7B-Instruct-v0.1                    | --           | --                     | 6.84     | 69.65         | 60.45          |
| Zephyr-7B-sft                               | SFT          | 200K SFT               | 5.32     | 75.12         | 60.93          |
| $\text{Zephyr-7B-}\beta$                    | SFT + DPO    | 200K SFT + 60K DPO     | 7.34     | 90.60         | 66.36          |
| OpenChat-3.5                                | C-RLFT       | >70K C-RLFT            | 7.81     | 88.51         | --             |
| Starling-7B                                 | C-RLFT + APA | >70K C-RLFT + 183K APA | 8.09     | 91.99         | --             |
| Random                                      | SFT          | 10K SFT                | 5.89     | 56.90         | 61.72          |
| DEITA-7B-v1.0-sft (6K)                      | SFT          | 6K SFT                 | 7.22     | 80.78         | --             |
| DEITA-7B-v1.0-sft                           | SFT          | 10K SFT                | 7.32     | 81.67         | 64.00          |
| DEITA-7B-v1.0                               | SFT + DPO    | 6K SFT + 10K DPO       | 7.44     | 89.69         | 70.32          |

## Input Format

The model is trained using the [vicuna_v1.1 template](https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py):

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hello! ASSISTANT: Hi!</s>USER: How are you? ASSISTANT:
```
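
Below is a minimal usage sketch, not taken from the model card, showing how one might build a prompt in this template and generate with Hugging Face Transformers. The repository id and generation settings are assumptions; substitute the actual values for this model.

```python
# Illustrative sketch: load the model and query it with a vicuna_v1.1-style prompt.
# The model id below is an assumption; replace it with this card's repository id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hkust-nlp/deita-7b-v1.0-sft-6k"  # assumed id, replace as needed

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# System preamble and turn markers follow the vicuna_v1.1 template shown above.
system = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)
prompt = f"{system} USER: How are you? ASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```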

### Training hyperparameters

The following hyperparameters were used during training:
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 6.0
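
As a rough illustration only, the two hyperparameters visible above could be expressed with Hugging Face `TrainingArguments` as sketched below; every other argument is a placeholder assumption, not a value from this card.

```python
# Sketch of the listed hyperparameters as transformers TrainingArguments.
# Only warmup_ratio and num_train_epochs come from the card; the rest are placeholders.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./deita-7b-v1.0-sft-6k",  # placeholder output path
    warmup_ratio=0.1,                     # lr_scheduler_warmup_ratio: 0.1
    num_train_epochs=6.0,                 # num_epochs: 6.0
)
```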

### Framework versions

- Transformers 4.34.1