phi3-offline-dpo-lora-noise-0.0-5e-7-42

This model is a LoRA adapter fine-tuned from microsoft/Phi-3-mini-4k-instruct with offline DPO on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6914
  • Rewards/chosen: -0.0098
  • Rewards/rejected: -0.0147
  • Rewards/accuracies: 0.5913
  • Rewards/margins: 0.0049
  • Logps/rejected: -385.2070
  • Logps/chosen: -409.5497
  • Logits/rejected: 12.4993
  • Logits/chosen: 14.2906
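
For context, these Rewards/* metrics follow the standard DPO convention (Rafailov et al., 2023), as implemented in common trainers such as TRL: a completion's implicit reward is its policy-versus-reference log-probability ratio scaled by the DPO temperature β, and the margin is the chosen reward minus the rejected reward. Note that the DPO loss at zero margin is −log σ(0) = log 2 ≈ 0.6931, so the reported loss of 0.6914 corresponds to the small positive margin above. A sketch of the quantities, assuming this convention (β's value is not reported in this card):

```latex
% Implicit DPO reward, assuming the standard convention;
% \beta is the DPO temperature (its value is not reported in this card).
r_\theta(x, y) = \beta \left[ \log \pi_\theta(y \mid x) - \log \pi_{\mathrm{ref}}(y \mid x) \right]

% Reported metrics in terms of r_\theta, with y_w the chosen and
% y_l the rejected completion, averaged over the evaluation set:
\text{Rewards/margins} = r_\theta(x, y_w) - r_\theta(x, y_l)
\text{Rewards/accuracies} = \Pr\bigl[ r_\theta(x, y_w) > r_\theta(x, y_l) \bigr]
```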

Model description

More information needed

Intended uses & limitations

More information needed
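
Since this is a PEFT LoRA adapter on microsoft/Phi-3-mini-4k-instruct, a typical way to use it is to load the adapter on top of the base model. The snippet below is a minimal usage sketch, not documented by the card authors, using the standard transformers + peft loading path; the prompt and generation settings are illustrative only.

```python
# Minimal usage sketch (not documented by the card authors):
# load the LoRA adapter on top of the Phi-3 base model.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "microsoft/Phi-3-mini-4k-instruct"
adapter_id = "Wenboz/phi3-offline-dpo-lora-noise-0.0-5e-7-42"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,  # may be unnecessary on recent transformers versions
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights

# Illustrative generation; settings are placeholders, not recommendations.
messages = [{"role": "user", "content": "Summarize LoRA fine-tuning in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
with torch.no_grad():
    out = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```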

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-07
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 4
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 64
  • total_eval_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 1
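
These settings compose as follows: the effective training batch is 4 per device × 4 devices × 4 gradient-accumulation steps = 64 examples per optimizer step, and the effective eval batch is 4 × 4 = 16. As a minimal sketch, the list maps onto Hugging Face TrainingArguments like this (the concrete trainer class, e.g. TRL's DPOTrainer, is not documented in this card):

```python
# Hedged sketch: the hyperparameters above expressed as Hugging Face
# TrainingArguments. The actual trainer class used is not documented here.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="phi3-offline-dpo-lora-noise-0.0-5e-7-42",
    learning_rate=5e-7,
    per_device_train_batch_size=4,  # x 4 GPUs x 4 accum steps = 64 effective
    per_device_eval_batch_size=4,   # x 4 GPUs = 16 effective
    gradient_accumulation_steps=4,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```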

Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.6933 | 0.1778 | 100 | 0.6932 | 0.0002 | -0.0009 | 0.5278 | 0.0011 | -383.8265 | -408.5499 | 12.5336 | 14.3223 |
| 0.6942 | 0.3556 | 200 | 0.6928 | -0.0035 | -0.0064 | 0.6111 | 0.0029 | -384.3740 | -408.9154 | 12.5221 | 14.3092 |
| 0.6933 | 0.5333 | 300 | 0.6918 | -0.0073 | -0.0121 | 0.5794 | 0.0048 | -384.9481 | -409.3039 | 12.5135 | 14.3020 |
| 0.6928 | 0.7111 | 400 | 0.6914 | -0.0104 | -0.0142 | 0.5714 | 0.0038 | -385.1545 | -409.6051 | 12.4934 | 14.2847 |
| 0.6935 | 0.8889 | 500 | 0.6912 | -0.0101 | -0.0159 | 0.5913 | 0.0058 | -385.3226 | -409.5751 | 12.4995 | 14.2891 |

Framework versions

  • PEFT 0.7.1
  • Transformers 4.42.3
  • PyTorch 2.3.0+cu121
  • Datasets 2.14.6
  • Tokenizers 0.19.1