---
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
- generated_from_trainer
model-index:
- name: zephyr-7b-dpo-full
results: []
---
# zephyr-7b-dpo-full
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on an unspecified preference dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5274
- Rewards/chosen: -0.0838
- Rewards/rejected: -1.1574
- Rewards/accuracies: 0.7579
- Rewards/margins: 1.0735
- Logps/rejected: -273.2137
- Logps/chosen: -289.3620
- Logits/rejected: -2.7815
- Logits/chosen: -2.7866
- Use Label: 0.0
- Pred Label: 0.0
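
For context, the reward metrics above are the implicit DPO rewards: with policy π_θ, the frozen SFT checkpoint as reference π_ref, and temperature β (whose value is not recorded in this card), DPO optimizes

```latex
r(x, y) = \beta \, \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)},
\qquad
\mathcal{L}_{\mathrm{DPO}}
  = -\,\mathbb{E}_{(x,\, y_w,\, y_l)}
    \Bigl[ \log \sigma\bigl( r(x, y_w) - r(x, y_l) \bigr) \Bigr].
```

`Rewards/chosen` and `Rewards/rejected` are the mean values of r(x, y) on the chosen and rejected responses, `Rewards/margins` is the mean of their difference, and `Rewards/accuracies` is the fraction of pairs for which the chosen response receives the higher reward.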
## Model description
This checkpoint is [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) further aligned with Direct Preference Optimization (DPO). The preference dataset used is not recorded in this card.
## Intended uses & limitations
More information needed
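
A minimal inference sketch with `transformers` follows; the Hub repo id is an assumption and should be replaced with the checkpoint's actual location.

```python
# Minimal inference sketch. The repo id below is an assumption
# (substitute the actual Hub path of this checkpoint).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "alignment-handbook/zephyr-7b-dpo-full"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Zephyr checkpoints ship a chat template with the tokenizer.
messages = [{"role": "user", "content": "Explain DPO in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```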
## Training and evaluation data
The preference dataset used for training and evaluation is not recorded in this card.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
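
A sketch of how these hyperparameters map onto `trl`'s `DPOTrainer` (the trl ~0.7 API, contemporary with Transformers 4.35). The dataset and the β value are assumptions, since neither is recorded in this card; launched across 8 GPUs (e.g. via `accelerate launch`), the per-device batch sizes below reproduce the totals of 64 (train) and 32 (eval).

```python
# Sketch of the training configuration with trl's DPOTrainer.
# The dataset and beta are assumptions; neither is recorded in this card.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "alignment-handbook/zephyr-7b-sft-full"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Hyperparameters from the list above; the Adam betas and epsilon
# are the TrainingArguments defaults, so they need no explicit flags.
args = TrainingArguments(
    output_dir="zephyr-7b-dpo-full",
    learning_rate=5e-7,
    per_device_train_batch_size=8,   # x 8 GPUs = 64 total
    per_device_eval_batch_size=4,    # x 8 GPUs = 32 total
    num_train_epochs=1,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
)

# Hypothetical preference dataset with prompt/chosen/rejected columns.
dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized")  # assumption

trainer = DPOTrainer(
    model,
    ref_model=None,   # trl creates a frozen copy of the model as reference
    args=args,
    beta=0.1,         # assumption: beta is not recorded in this card
    train_dataset=dataset["train_prefs"],
    eval_dataset=dataset["test_prefs"],
    tokenizer=tokenizer,
)
trainer.train()
```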
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Use Label | Pred Label |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:---------:|:----------:|
| 0.6368 | 0.1 | 100 | 0.6040 | 0.1800 | -0.4000 | 0.6746 | 0.5800 | -265.6400 | -286.7236 | -2.8083 | -2.8249 | 0.0 | 0.0 |
| 0.558 | 0.21 | 200 | 0.5652 | 0.1323 | -0.7862 | 0.7421 | 0.9186 | -269.5020 | -287.2001 | -2.7981 | -2.8081 | 0.0 | 0.0 |
| 0.553 | 0.31 | 300 | 0.5432 | -0.0674 | -1.0423 | 0.7341 | 0.9749 | -272.0630 | -289.1978 | -2.7421 | -2.7517 | 0.0 | 0.0 |
| 0.5019 | 0.42 | 400 | 0.5371 | 0.1229 | -0.9260 | 0.7540 | 1.0490 | -270.9003 | -287.2944 | -2.7871 | -2.7961 | 0.0 | 0.0 |
| 0.5303 | 0.52 | 500 | 0.5362 | 0.0755 | -0.9534 | 0.7381 | 1.0290 | -271.1743 | -287.7682 | -2.7415 | -2.7495 | 0.0 | 0.0 |
| 0.5791 | 0.63 | 600 | 0.5281 | 0.0277 | -1.0275 | 0.7460 | 1.0552 | -271.9149 | -288.2469 | -2.7518 | -2.7595 | 0.0 | 0.0 |
| 0.5238 | 0.73 | 700 | 0.5295 | 0.0341 | -1.0667 | 0.7540 | 1.1008 | -272.3072 | -288.1828 | -2.7262 | -2.7338 | 0.0 | 0.0 |
| 0.515 | 0.84 | 800 | 0.5258 | -0.0054 | -1.1189 | 0.7540 | 1.1135 | -272.8286 | -288.5772 | -2.7479 | -2.7544 | 0.0 | 0.0 |
| 0.5166 | 0.94 | 900 | 0.5273 | -0.0792 | -1.1432 | 0.7619 | 1.0640 | -273.0717 | -289.3157 | -2.7775 | -2.7829 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.1+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1