---
datasets:
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
- meta-math/MetaMathQA
- WizardLM/WizardLM_evol_instruct_V2_196k
- Intel/orca_dpo_pairs
language:
- en
tags:
- causal-lm
extra_gated_fields:
  Name: text
  Email: text
  Country: text
  Organization or Affiliation: text
  I ALLOW Stability AI to email me about new model releases: checkbox
---
# `Stable Zephyr 3B`

## Model Description

`Stable Zephyr 3B` is a 3 billion parameter instruction-tuned model inspired by [HuggingFaceH4's Zephyr 7B](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) training pipeline. It was trained on a mix of publicly available and synthetic datasets using [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290). Evaluation for this model is based on
[MT Bench](https://arxiv.org/abs/2306.05685) and [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/).

## Usage

Get started generating text with `Stable Zephyr 3B` by using the following code snippet:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("stabilityai/stable-zephyr-3b-dpo")
model = AutoModelForCausalLM.from_pretrained(
  "stabilityai/stable-zephyr-3b-dpo",
  trust_remote_code=True,
  torch_dtype="auto",
)
model.cuda()
prompt = "<|user|>\nIn the field of quantum physics, what is superposition, and how does it relate to the phenomenon of quantum entanglement?<|endoftext|>\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
tokens = model.generate(
  **inputs,
  max_new_tokens=1024,
  temperature=0.7,
  top_p=0.95,
  do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```
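The prompt string above follows a simple turn-based layout. As an illustrative sketch (the helper below and its name are hypothetical, not part of the model's API), a multi-turn conversation can be rendered into that format like so:

```python
# Hypothetical helper that renders a conversation into the
# <|user|>/<|assistant|> prompt format shown in the snippet above.
# The special tokens are taken from the example prompt; check the
# tokenizer's chat template before relying on this exact layout.
def build_prompt(messages):
    parts = []
    for msg in messages:
        parts.append(f"<|{msg['role']}|>\n{msg['content']}<|endoftext|>\n")
    # A trailing assistant tag cues the model to generate its reply.
    parts.append("<|assistant|>\n")
    return "".join(parts)

prompt = build_prompt([
    {"role": "user", "content": "What is superposition?"},
])
print(prompt)
```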

## Model Details

* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: `Stable Zephyr 3B` is an auto-regressive language model based on the transformer decoder architecture.
* **Language(s)**: English
* **Library**: [Alignment Handbook](https://github.com/huggingface/alignment-handbook.git)
* **Finetuned from model**: [stabilityai/stablelm-3b-4e1t](https://huggingface.co/stabilityai/stablelm-3b-4e1t)
* **License**: TBD
* **Contact**: For questions and comments about the model, please email `lm@stability.ai`

### Training Dataset

The dataset comprises a mixture of open large-scale datasets available on the [HuggingFace Hub](https://huggingface.co/datasets):
1. SFT Datasets:
- HuggingFaceH4/ultrachat_200k
- meta-math/MetaMathQA
- WizardLM/WizardLM_evol_instruct_V2_196k
- Open-Orca/SlimOrca
2. Preference Datasets:
- HuggingFaceH4/ultrafeedback_binarized
- Intel/orca_dpo_pairs


### Training Procedure
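For reference, the core of the DPO objective linked above can be sketched in a few lines. This is a minimal per-example illustration of the loss (the `beta` default is illustrative), not the training code used for this model:

```python
import math

def dpo_loss(pi_logp_chosen, pi_logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Per-example DPO loss: -log(sigmoid(beta * (policy log-ratio of
    the chosen response minus that of the rejected response)))."""
    logits = beta * ((pi_logp_chosen - ref_logp_chosen)
                     - (pi_logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-logits)))

# When the policy prefers the chosen response more strongly than the
# reference model does, the loss falls below -log(0.5).
print(dpo_loss(-1.0, -5.0, -2.0, -4.0))
```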

## Performance

### MT Bench and Alpaca Bench

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6310474ca119d49bc1eb0d80/XRmo7zxSsFWPez3wUTLDS.png)

| Model | Size | Alignment | MT-Bench (score) | AlpacaEval (win rate %) |
|-------------|-----|----|---------------|--------------|
| **Stable Zephyr 3B** 🪁 | 3B | DPO | 6.64 | 76.00 |
| Stable Zephyr (SFT only) | 3B | SFT | 6.04 | 71.15 | 
| MPT-Chat |  7B |dSFT |5.42| -|
| Xwin-LMv0.1 | 7B| dPPO| 6.19| 87.83|
| Mistral-Instructv0.1 | 7B|  - | 6.84 |-|
| Zephyr-7b-α |7B|  dDPO| 6.88| -|
| Zephyr-7b-β| 7B | dDPO | 7.34 | 90.60 |
| Falcon-Instruct |  40B |dSFT |5.17 |45.71|
| Guanaco | 65B |  SFT |6.41| 71.80|
| Llama2-Chat |  70B |RLHF |6.86| 92.66|
| Vicuna v1.3 |  33B |dSFT |7.12 |88.99|
| WizardLM v1.0 |  70B |dSFT |7.71 |-|
| Xwin-LM v0.1 |   70B |dPPO |- |95.57|
| GPT-3.5-turbo | - |RLHF |7.94 |89.37|
| Claude 2 |  - |RLHF |8.06| 91.36|
| GPT-4 |  -| RLHF |8.99| 95.28|

## Other Benchmarks

1. **HuggingFace OpenLLM Leaderboard**

| Metric                | Value                     |
|-----------------------|---------------------------|
| ARC (25-shot)         |  47.0       |
| HellaSwag (10-shot)   | 74.2    |
| MMLU (5-shot)        |   46.3     |
| TruthfulQA (0-shot)   |   46.5 |
| Winogrande (5-shot)   |   65.5 |
| GSM8K (5-shot)        | 42.3        |


2. **BigBench**:

- Average: 35.26
- Details: 

| Task                                                | Version | Metric                  | Value | Stderr |
|-----------------------------------------------------|---------|-------------------------|-------|--------|
| bigbench_causal_judgement                           | 0       | multiple_choice_grade   | 0.5316| 0.0363 |
| bigbench_date_understanding                         | 0       | multiple_choice_grade   | 0.4363| 0.0259 |
| bigbench_disambiguation_qa                          | 0       | multiple_choice_grade   | 0.3217| 0.0291 |
| bigbench_dyck_languages                             | 0       | multiple_choice_grade   | 0.1450| 0.0111 |
| bigbench_formal_fallacies_syllogisms_negation       | 0       | multiple_choice_grade   | 0.4982| 0.0042 |
| bigbench_geometric_shapes                           | 0       | multiple_choice_grade   | 0.1086| 0.0164 |
| bigbench_hyperbaton                                 | 0       | exact_str_match         | 0.0000| 0.0000 |
| bigbench_logical_deduction_five_objects             | 0       | multiple_choice_grade   | 0.5232| 0.0022 |
| bigbench_logical_deduction_seven_objects            | 0       | multiple_choice_grade   | 0.2480| 0.0193 |
| bigbench_logical_deduction_three_objects            | 0       | multiple_choice_grade   | 0.1814| 0.0146 |
| bigbench_movie_recommendation                       | 0       | multiple_choice_grade   | 0.4067| 0.0284 |
| bigbench_navigate                                   | 0       | multiple_choice_grade   | 0.2580| 0.0196 |
| bigbench_reasoning_about_colored_objects            | 0       | multiple_choice_grade   | 0.5990| 0.0155 |
| bigbench_ruin_names                                 | 0       | multiple_choice_grade   | 0.4370| 0.0111 |
| bigbench_salient_translation_error_detection        | 0       | multiple_choice_grade   | 0.3951| 0.0231 |
| bigbench_snarks                                     | 0       | multiple_choice_grade   | 0.2265| 0.0133 |
| bigbench_sports_understanding                       | 0       | multiple_choice_grade   | 0.6464| 0.0356 |
| bigbench_temporal_sequences                         | 0       | multiple_choice_grade   | 0.5091| 0.0159 |
| bigbench_tracking_shuffled_objects_five_objects     | 0       | multiple_choice_grade   | 0.2680| 0.0140 |
| bigbench_tracking_shuffled_objects_seven_objects    | 0       | multiple_choice_grade   | 0.1856| 0.0110 |
| bigbench_tracking_shuffled_objects_three_objects    | 0       | multiple_choice_grade   | 0.1269| 0.0080 |

3. **AGIEval**:
- Average: 33.23
- Details:

|             Task             |Version| Metric |Value |   |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat              |      0|acc     |0.2126|±  |0.0257|
|                              |       |acc_norm|0.1890|±  |0.0246|
|agieval_gaokao_biology        |      0|acc     |0.2571|±  |0.0302|
|                              |       |acc_norm|0.3143|±  |0.0321|
|agieval_gaokao_chemistry      |      0|acc     |0.2464|±  |0.0300|
|                              |       |acc_norm|0.2899|±  |0.0316|
|agieval_gaokao_chinese        |      0|acc     |0.2927|±  |0.0291|
|                              |       |acc_norm|0.3049|±  |0.0294|
|agieval_gaokao_english        |      0|acc     |0.6176|±  |0.0278|
|                              |       |acc_norm|0.6438|±  |0.0274|
|agieval_gaokao_geography      |      0|acc     |0.3015|±  |0.0326|
|                              |       |acc_norm|0.3065|±  |0.0328|
|agieval_gaokao_history        |      0|acc     |0.3106|±  |0.0303|
|                              |       |acc_norm|0.3319|±  |0.0308|
|agieval_gaokao_mathqa         |      0|acc     |0.2650|±  |0.0236|
|                              |       |acc_norm|0.2707|±  |0.0237|
|agieval_gaokao_physics        |      0|acc     |0.3450|±  |0.0337|
|                              |       |acc_norm|0.3550|±  |0.0339|
|agieval_logiqa_en             |      0|acc     |0.2980|±  |0.0179|
|                              |       |acc_norm|0.3195|±  |0.0183|
|agieval_logiqa_zh             |      0|acc     |0.2842|±  |0.0177|
|                              |       |acc_norm|0.3318|±  |0.0185|
|agieval_lsat_ar               |      0|acc     |0.2000|±  |0.0264|
|                              |       |acc_norm|0.2043|±  |0.0266|
|agieval_lsat_lr               |      0|acc     |0.3176|±  |0.0206|
|                              |       |acc_norm|0.3275|±  |0.0208|
|agieval_lsat_rc               |      0|acc     |0.4312|±  |0.0303|
|                              |       |acc_norm|0.4201|±  |0.0301|
|agieval_sat_en                |      0|acc     |0.6117|±  |0.0340|
|                              |       |acc_norm|0.6117|±  |0.0340|
|agieval_sat_en_without_passage|      0|acc     |0.3398|±  |0.0331|
|                              |       |acc_norm|0.3495|±  |0.0333|
|agieval_sat_math              |      0|acc     |0.3182|±  |0.0315|
|                              |       |acc_norm|0.2909|±  |0.0307|

### Training Infrastructure

* **Hardware**: `Stable Zephyr 3B` was trained on the Stability AI cluster across 8 nodes, each with 8 A100 80GB GPUs.
* **Code Base**: We used our internal scripts for the SFT steps and the [HuggingFace Alignment Handbook script](https://github.com/huggingface/alignment-handbook) for DPO training.

## Use and Limitations

### Intended Use

The model is intended to be used as a foundational base model for application-specific fine-tuning. Developers must evaluate and fine-tune the model for safe performance in downstream applications.

### Limitations and Bias

As a base model, this model may exhibit unreliable, unsafe, or other undesirable behaviors that must be corrected through evaluation and fine-tuning prior to deployment. The pre-training dataset may have contained offensive or inappropriate content, even after applying data cleansing filters, which can be reflected in the model-generated text. We recommend that users exercise caution when using these models in production systems. Do not use the models if they are unsuitable for your application, or for any applications that may cause deliberate or unintentional harm to others.