# zephyr-dpop-qlora-uf-ours-5e-6-epoch1
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the generation/UF dataset. It achieves the following results on the evaluation set (the loss terms are sketched just after this list):
- Loss: 1.6168
- Positive Losses: 8.8283
- Dpo Losses: 0.6446
- Rewards/chosen: 0.0003
- Rewards/rejected: -0.1296
- Rewards/accuracies: 0.6500
- Rewards/margins: 0.1299
- Rewards/margins Max: 0.6434
- Rewards/margins Min: -0.3494
- Rewards/margins Std: 0.3327
- Logps/rejected: -271.5416
- Logps/chosen: -284.5663
- Logits/rejected: -2.6717
- Logits/chosen: -2.7138
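The model name and the separate "Dpo Losses" and "Positive Losses" entries above suggest a DPO-Positive (DPOP) style objective: the standard DPO loss plus a penalty that discourages the policy's log-probability of the chosen response from dropping below that of the reference model. A generic form of such an objective is shown below; the penalty weight λ and its exact placement are assumptions, as they are not documented in this card.

$$
\mathcal{L}_{\text{DPOP}}(\theta) =
\underbrace{-\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\text{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\text{ref}}(y_l \mid x)}\right)}_{\text{Dpo Losses}}
\;+\;
\lambda \cdot \underbrace{\max\!\left(0,\; \log \frac{\pi_{\text{ref}}(y_w \mid x)}{\pi_\theta(y_w \mid x)}\right)}_{\text{Positive Losses}}
$$

Here $y_w$ and $y_l$ are the chosen and rejected responses. In the usual DPO bookkeeping, Rewards/chosen and Rewards/rejected are the β-scaled log-probability ratios of the policy over the reference model for the chosen and rejected responses, and Rewards/margins is their difference.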
## Model description
More information needed
## Intended uses & limitations
More information needed
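As a starting point, the snippet below is a minimal, untested sketch of loading the QLoRA adapter for inference with PEFT and Transformers. It assumes the adapter weights are hosted under this repo id and that the adapter repo includes the tokenizer files; neither is confirmed by the card.

```python
# Hedged usage sketch: attach the LoRA adapter to the SFT base model and generate.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "just1nseo/zephyr-dpop-qlora-uf-ours-5e-6-epoch1"  # assumption: adapter repo id

# AutoPeftModelForCausalLM resolves the base model from the adapter config
# (alignment-handbook/zephyr-7b-sft-full) and attaches the LoRA weights.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
# Assumption: tokenizer files are present in the adapter repo;
# otherwise load them from the base SFT model instead.
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

# Zephyr models use a chat template; apply it before generation.
messages = [{"role": "user", "content": "Explain LoRA in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```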
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a reconstruction of the corresponding configuration follows the list):
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
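For reference, the listed values map onto a `transformers` / `peft` setup roughly as sketched below. This is an illustrative reconstruction only: the LoRA rank/alpha/dropout, the 4-bit quantization settings, and the precision flag are not recorded in this card, so those values are placeholders.

```python
# Hedged reconstruction of the training configuration from the values listed above.
# LoRA and quantization settings are NOT in the card; the values marked as
# placeholders are assumptions.
import torch
from transformers import TrainingArguments, BitsAndBytesConfig
from peft import LoraConfig

training_args = TrainingArguments(
    output_dir="zephyr-dpop-qlora-uf-ours-5e-6-epoch1",
    learning_rate=5e-6,
    per_device_train_batch_size=4,   # x 2 GPUs x 2 accumulation steps = 16 effective
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    bf16=True,                       # placeholder: precision not documented
)

# Placeholder QLoRA settings (not documented in the card).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
peft_config = LoraConfig(
    r=16,                # placeholder
    lora_alpha=16,       # placeholder
    lora_dropout=0.05,   # placeholder
    task_type="CAUSAL_LM",
)
```

In the alignment-handbook style recipes these objects are typically passed to a TRL `DPOTrainer`; the DPOP positive-loss term would presumably be added on top of the standard DPO loss by a customized trainer, which is not public here.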
### Training results
| Training Loss | Epoch | Step | Validation Loss | Positive Losses | Dpo Losses | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Rewards/margins Max | Rewards/margins Min | Rewards/margins Std | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.6232 | 0.28 | 100 | 1.1413 | 4.2610 | 0.6656 | 0.0429 | -0.0252 | 0.6230 | 0.0680 | 0.4086 | -0.2281 | 0.2094 | -261.0972 | -280.3080 | -2.6361 | -2.6726 |
| 0.5625 | 0.56 | 200 | 1.7186 | 9.6677 | 0.6469 | -0.0183 | -0.1426 | 0.6420 | 0.1243 | 0.6362 | -0.3433 | 0.3277 | -272.8399 | -286.4236 | -2.6380 | -2.6780 |
| 0.4748 | 0.85 | 300 | 1.6048 | 8.7062 | 0.6448 | 0.0014 | -0.1274 | 0.6470 | 0.1288 | 0.6375 | -0.3461 | 0.3295 | -271.3224 | -284.4528 | -2.6733 | -2.7153 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.39.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2