# slm-2-dpo-full
This model is a fine-tuned version of [stabilityai/stablelm-2-zephyr-1_6b](https://huggingface.co/stabilityai/stablelm-2-zephyr-1_6b) on the [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) dataset. It achieves the following results on the evaluation set (a loading sketch follows the list):
- Loss: 31.9894
- Rewards/chosen: 0.0244
- Rewards/rejected: 0.0188
- Rewards/accuracies: 0.5234
- Rewards/margins: 0.0057
- Logps/rejected: -2491.7576
- Logps/chosen: -2806.6704
- Logits/rejected: -1.6239
- Logits/chosen: -1.6845
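A minimal inference sketch for loading this checkpoint with `transformers`, assuming the repo keeps the base model's chat template and the custom StableLM 2 modeling code shipped for Transformers 4.36.x (hence `trust_remote_code=True`); the prompt is illustrative only:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "imelnyk/slm-2-dpo-full"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption; use float32 on CPU-only setups
    device_map="auto",
    trust_remote_code=True,      # assumption: the repo contains custom code
)

messages = [{"role": "user", "content": "Summarize DPO in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```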
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a sketch of the corresponding trainer setup follows the list):
- learning_rate: 5e-07
- train_batch_size: 5
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 80
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
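These settings map roughly onto a `trl` `DPOTrainer` run. The sketch below targets the trl 0.7-era API that matches the framework versions listed later; the `beta` value, the `bf16` flag, and the message-list flattening are assumptions, not details taken from this card:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "stabilityai/stablelm-2-zephyr-1_6b"
model = AutoModelForCausalLM.from_pretrained(base, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(base, trust_remote_code=True)

dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized")

def to_text(example):
    # The dataset stores chosen/rejected as chat message lists; DPOTrainer
    # expects plain strings, so keep only the final assistant reply here
    # (a simplification; full runs usually apply the chat template instead).
    return {
        "prompt": example["prompt"],
        "chosen": example["chosen"][-1]["content"],
        "rejected": example["rejected"][-1]["content"],
    }

args = TrainingArguments(
    output_dir="slm-2-dpo-full",
    learning_rate=5e-7,
    per_device_train_batch_size=5,   # x 8 GPUs x 2 accumulation = 80 total
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    evaluation_strategy="steps",
    eval_steps=100,                  # matches the eval cadence in the table below
    bf16=True,                       # assumption; precision is not listed above
    remove_unused_columns=False,     # DPOTrainer handles its own column pruning
)

trainer = DPOTrainer(
    model,
    ref_model=None,                  # trl clones a frozen reference model
    args=args,
    beta=0.1,                        # assumption: trl's default DPO beta
    train_dataset=dataset["train_prefs"].map(to_text),
    eval_dataset=dataset["test_prefs"].map(to_text),
    tokenizer=tokenizer,
)
trainer.train()
```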
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|---------------|-------|------|-----------------|----------------|------------------|--------------------|-----------------|----------------|--------------|-----------------|---------------|
| 16.8403 | 0.13 | 100 | 19.5118 | 0.0256 | 0.0173 | 0.5273 | 0.0082 | -2491.9011 | -2806.5552 | -1.6068 | -1.6730 |
| 28.1241 | 0.26 | 200 | 32.5175 | 0.0085 | -0.0039 | 0.5234 | 0.0124 | -2494.0195 | -2808.2581 | -1.6183 | -1.6812 |
| 84.7591 | 0.39 | 300 | 47.8043 | 0.0297 | 0.0136 | 0.5391 | 0.0161 | -2492.2703 | -2806.1406 | -1.5968 | -1.6601 |
| 40.7835 | 0.52 | 400 | 30.6722 | 0.0168 | -0.0029 | 0.5547 | 0.0197 | -2493.9204 | -2807.4263 | -1.6288 | -1.6917 |
| 36.2204 | 0.65 | 500 | 31.2202 | 0.0303 | 0.0209 | 0.5352 | 0.0095 | -2491.5447 | -2806.0762 | -1.6236 | -1.6843 |
| 99.7738 | 0.78 | 600 | 33.7403 | 0.0476 | 0.0372 | 0.5391 | 0.0104 | -2489.9089 | -2804.3484 | -1.6222 | -1.6827 |
| 41.8506 | 0.92 | 700 | 32.9133 | 0.0301 | 0.0195 | 0.5547 | 0.0106 | -2491.6851 | -2806.1006 | -1.6211 | -1.6823 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.2.0+cu118
- Datasets 2.14.6
- Tokenizers 0.15.2