Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
# phi-2-upscaled-4B-instruct-v0.1 - bnb 8bits
- Model creator: https://huggingface.co/daekeun-ml/
- Original model: https://huggingface.co/daekeun-ml/phi-2-upscaled-4B-instruct-v0.1/
Original model description:
---
language:
- en
license: apache-2.0
library_name: transformers
datasets:
- Intel/orca_dpo_pairs
- wikipedia
- Open-Orca/OpenOrca
inference: false
---
# phi-2-upscaled-4B-instruct-v0.1
## Model Details
This model was built by applying continued pre-training and fine-tuning (instruction tuning) to a base model using the depth up-scaling (DUS) technique introduced by Upstage.
### DUS (Depth Up-Scaling) and continued pre-training
Following the methodology described in the paper, we expanded the model from 32 transformer blocks to 48 blocks and then continued pre-training on public data. Pre-training ran for 3 days on 4 AWS `ml.g5.48xlarge` instances (32 NVIDIA A10G GPUs in total), using a sample set from Wikipedia.
Note that performance is not guaranteed, since only a small amount of data was used for this experiment: the training set contains only about 1.5 million samples after tokenization.
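For illustration, here is a minimal sketch of what the DUS expansion might look like with the `transformers` Phi implementation. This is an assumption for clarity, not the authors' actual code; it follows the SOLAR recipe of duplicating the model and dropping the 8 seam layers from each copy, and the `model.model.layers` attribute path is specific to the Hugging Face Phi architecture.
```python
import copy
import torch
from transformers import AutoModelForCausalLM

# Load the 32-layer phi-2 base model.
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype=torch.bfloat16)

# DUS: concatenate the first 24 layers of one copy with the last 24 layers of
# a duplicate, dropping 8 layers at the seam, to grow from 32 to 48 blocks.
layers = model.model.layers
new_layers = [copy.deepcopy(layer) for layer in layers[:24]]
new_layers += [copy.deepcopy(layer) for layer in layers[8:]]
model.model.layers = torch.nn.ModuleList(new_layers)
model.config.num_hidden_layers = len(new_layers)  # 48
```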
For distributed training, all weights were trained directly (no adapter techniques were used), and sharded data parallelism was handled by DeepSpeed ZeRO-2. The presets are as follows.
```json
{
  "fp16": {
    "enabled": "auto",
    "loss_scale": 0,
    "loss_scale_window": 1000,
    "initial_scale_power": 16,
    "hysteresis": 2,
    "min_loss_scale": 1
  },
  "bf16": {
    "enabled": "auto"
  },
  "optimizer": {
    "type": "AdamW",
    "params": {
      "lr": "auto",
      "betas": "auto",
      "eps": "auto",
      "weight_decay": "auto"
    }
  },
  "scheduler": {
    "type": "WarmupLR",
    "params": {
      "warmup_min_lr": "auto",
      "warmup_max_lr": "auto",
      "warmup_num_steps": "auto"
    }
  },
  "zero_optimization": {
    "stage": 2,
    "allgather_partitions": true,
    "allgather_bucket_size": 2e8,
    "overlap_comm": true,
    "reduce_scatter": true,
    "reduce_bucket_size": 2e8,
    "contiguous_gradients": true,
    "cpu_offload": true
  },
  "gradient_accumulation_steps": "auto",
  "gradient_clipping": "auto",
  "train_batch_size": "auto",
  "train_micro_batch_size_per_gpu": "auto"
}
```
Some hyperparameters are listed below.
```
batch_size: 2
num_epochs: 1
learning_rate: 3e-4
gradient_accumulation_steps: 8
lr_scheduler_type: "linear"
group_by_length: False
```
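For reference, here is a minimal sketch of how these presets might be wired into a Hugging Face `Trainer` run. The file name `ds_config.json` and the script skeleton are assumptions, not the authors' actual training script.
```python
from transformers import Trainer, TrainingArguments

# The "auto" fields in ds_config.json are resolved from these arguments by the
# transformers DeepSpeed integration. `model` and `train_dataset` stand for the
# up-scaled model and the tokenized Wikipedia sample set prepared earlier.
training_args = TrainingArguments(
    output_dir="outputs",
    num_train_epochs=1,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=3e-4,
    lr_scheduler_type="linear",
    group_by_length=False,
    deepspeed="ds_config.json",  # the ZeRO-2 preset shown above
)
trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train()
```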
### Fine-tuning
After pre-training, instruction tuning and alignment tuning were performed sequentially. This process took only about 10 hours on an AWS `ml.g5.24xlarge` instance (4 NVIDIA A10G GPUs). The dataset used for instruction tuning is a sample set of the OpenOrca dataset, and the dataset used for alignment tuning is Intel's orca_dpo_pairs dataset.
All fine-tuning was performed with QLoRA, with batch sizes of 3 and 1, respectively. We used a context length of 1,024. 2,048 is also possible, but applying DPO often runs out of memory with 24 GB of GPU memory, so we settled on 1,024.
Please see below for relevant code snippets.
```python
from peft import LoraConfig
from transformers import TrainingArguments

# batch_size was 3 for instruction tuning and 1 for DPO (see above).
batch_size = 3

peft_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "fc1", "fc2"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
training_arguments = TrainingArguments(
    output_dir="logs",
    num_train_epochs=1,
    per_device_train_batch_size=batch_size,
    gradient_accumulation_steps=4,
    optim="paged_adamw_8bit",
    learning_rate=3e-4,
    weight_decay=0.001,
    bf16=True,
    max_grad_norm=0.3,
    max_steps=-1,
    warmup_ratio=0.03,
    group_by_length=True,
    lr_scheduler_type="cosine",
    report_to="wandb", ...
)
```
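The snippet above does not show how the base model is loaded for QLoRA. A plausible 4-bit loading configuration would look like the sketch below; the quantization settings are common QLoRA defaults and the checkpoint path is a placeholder, neither confirmed by the authors.
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# QLoRA keeps the base weights frozen in 4-bit NF4 and trains LoRA adapters on
# top of them; these settings are typical QLoRA defaults (an assumption).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
base_model_path = "path/to/continued-pretrained-checkpoint"  # hypothetical path
model = AutoModelForCausalLM.from_pretrained(
    base_model_path,
    quantization_config=bnb_config,
)
```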
### References
- Base model: [microsoft/phi-2](https://huggingface.co/microsoft/phi-2)
- Paper: [SOLAR 10.7B](https://arxiv.org/abs/2312.15166)
## How to Get Started with the Model
Since this model uses the ChatML template (the format used by ChatGPT), the `<|im_start|>` and `<|im_end|>` tokens were added to the tokenizer.
You can use Hugging Face's chat template to create the prompt, but you can also create the prompt yourself with the code snippet below.
```python
def create_inference_prompt(text):
string = f"""<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{text}<|im_end|>
<|im_start|>assistant
"""
return string
```
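For example:
```python
# Produces a ChatML-formatted string ending with the assistant header,
# ready to be tokenized and passed to the model.
print(create_inference_prompt("What is a Large Language Model?"))
```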
If you simply want to see inference results, use the code snippet below.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

torch.set_default_device("cuda")
model_path = "daekeun-ml/phi-2-upscaled-4B-instruct-v0.1"
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    model_path,
    use_fast=True,
    trust_remote_code=True,
)
# Format prompt
message = [
{"role": "system", "content": "You are a helpful AI assistant. Generate appropriate answers to given questions."},
{"role": "user", "content": "What is a Large Language Model?"}
]
prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False)
inputs = tokenizer(prompt, return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, top_p=0.9, temperature=0.5, repetition_penalty=1.2)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
## Notes
### License
Apache 2.0. The license of phi-2 is MIT, but the license of the Orca datasets used for training is Apache 2.0.
### Caution
This model was created as a personal experiment, unrelated to the organization I work for. It may not operate correctly, as no separate verification was performed. Please be careful: it is intended only for personal experimentation or proof-of-concept (PoC) use!