---
library_name: transformers
license: gemma
base_model: google/gemma-7b
tags:
  - alignment-handbook
  - trl
  - orpo
  - generated_from_trainer
datasets:
  - silviasapora/low_quality_dpo7k
model-index:
  - name: gemma-7b-orpo-low-quality
    results: []
---

# gemma-7b-orpo-low-quality

This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on the [silviasapora/low_quality_dpo7k](https://huggingface.co/datasets/silviasapora/low_quality_dpo7k) dataset. It achieves the following results on the evaluation set:

- Loss: 1.5517
- Rewards/chosen: -0.0554
- Rewards/rejected: -0.0646
- Rewards/accuracies: 0.5612
- Rewards/margins: 0.0092
- Logps/rejected: -1.2920
- Logps/chosen: -1.1085
- Logits/rejected: 268.0282
- Logits/chosen: 297.1682
- Nll Loss: 1.4855
- Log Odds Ratio: -0.6970
- Log Odds Chosen: 0.2856
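
For context on how these metrics relate: the reported Loss is the ORPO objective (Hong et al., 2024), which augments the supervised NLL term with a log odds ratio penalty weighted by λ (TRL's `beta` parameter, not listed on this card):

$$
\mathcal{L}_{\mathrm{ORPO}} = \mathcal{L}_{\mathrm{NLL}} + \lambda \, \mathcal{L}_{\mathrm{OR}},
\qquad
\mathcal{L}_{\mathrm{OR}} = -\log \sigma\!\left(\log \frac{\mathrm{odds}_\theta(y_w \mid x)}{\mathrm{odds}_\theta(y_l \mid x)}\right)
$$

where $y_w$ and $y_l$ are the chosen and rejected responses and $\mathrm{odds}_\theta(y \mid x) = \frac{P_\theta(y \mid x)}{1 - P_\theta(y \mid x)}$.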

## Model description

This model is google/gemma-7b aligned with ORPO (Odds Ratio Preference Optimization), a reference-model-free preference tuning method, using the alignment-handbook and TRL stacks on the preference dataset listed above. A minimal inference sketch follows.
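
The sketch below assumes the checkpoint is hosted at `silviasapora/gemma-7b-orpo-low-quality` (taken from the card's model name) and that the tokenizer carries a chat template (alignment-handbook recipes usually set one):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the checkpoint lives at this repo id (from the card's model name).
model_id = "silviasapora/gemma-7b-orpo-low-quality"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision so the 7B model fits on one large GPU
    device_map="auto",
)

# Assumption: the tokenizer ships a chat template.
messages = [{"role": "user", "content": "Summarize what ORPO training does in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```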

## Intended uses & limitations

More information needed

## Training and evaluation data

The model was fine-tuned with ORPO on the silviasapora/low_quality_dpo7k preference dataset; see the dataset card for details on its construction.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch with TRL follows the list):

- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: inverse_sqrt
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
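
A hedged reproduction sketch using TRL's `ORPOTrainer`, mirroring the hyperparameters above; the dataset column layout (DPO-style `prompt`/`chosen`/`rejected`) and the `output_dir` are assumptions not stated on the card:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

# Assumption: the dataset exposes DPO-style "prompt"/"chosen"/"rejected" columns.
dataset = load_dataset("silviasapora/low_quality_dpo7k", split="train")

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b")

args = ORPOConfig(
    output_dir="gemma-7b-orpo-low-quality",  # illustrative output path
    learning_rate=5e-6,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=4,  # 2 per device x 4 GPUs x 4 steps = total batch of 32
    num_train_epochs=3,
    lr_scheduler_type="inverse_sqrt",
    warmup_steps=100,
    seed=42,
    # beta (the odds-ratio weight) is not listed on the card; TRL's default of 0.1 applies.
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    tokenizer=tokenizer,  # recent TRL versions take processing_class= instead
)
trainer.train()
```

To match the card's multi-GPU setup, run the script under `accelerate launch --num_processes 4`; Adam betas `(0.9, 0.999)` and epsilon `1e-08` are the Transformers defaults, so no explicit optimizer flags are needed.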

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Nll Loss | Log Odds Ratio | Log Odds Chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:--------:|:--------------:|:---------------:|
| 1.436         | 0.9955 | 167  | 1.4679          | -0.0508        | -0.0571          | 0.5468             | 0.0063          | -1.1420        | -1.0158      | 288.9292        | 318.3812      | 1.4121   | -0.6895        | 0.1983          |
| 1.1098        | 1.9970 | 335  | 1.4451          | -0.0518        | -0.0579          | 0.5468             | 0.0061          | -1.1581        | -1.0353      | 286.4312        | 315.0296      | 1.3839   | -0.7228        | 0.2105          |
| 0.5921        | 2.9866 | 501  | 1.5517          | -0.0554        | -0.0646          | 0.5612             | 0.0092          | -1.2920        | -1.1085      | 268.0282        | 297.1682      | 1.4855   | -0.6970        | 0.2856          |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1