---
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
datasets:
- llama-duo/synth_summarize_dataset_dedup
base_model: google/gemma-7b
model-index:
- name: gemma7b-summarize-gpt4o-4k
  results: []
---

# gemma7b-summarize-gpt4o-4k

This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on the llama-duo/synth_summarize_dataset_dedup dataset. It achieves the following results on the evaluation set:

- Loss: 6.0322
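
Since this is a PEFT (LoRA) adapter rather than full model weights, it can be loaded on top of the base model with `peft`. The sketch below is illustrative and not part of the original card; the hub path is assumed from the card's metadata, and the prompt is a placeholder.

```python
# Minimal usage sketch (assumed repo path, placeholder prompt).
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "llama-duo/gemma7b-summarize-gpt4o-4k"  # assumed hub path

# Loads google/gemma-7b and applies the LoRA adapter on top of it.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

prompt = "Summarize the following text:\n..."  # placeholder
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```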

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
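
For reference, these settings map onto `transformers.TrainingArguments` roughly as sketched below. This is a hedged reconstruction, not the authors' training script (per the tags, the run used the alignment-handbook tooling); `output_dir` is a placeholder, and the per-device sizes combine with 8 GPUs and 2 accumulation steps to give the stated total train batch size of 64.

```python
# Hedged reconstruction of the listed hyperparameters; not the original script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gemma7b-summarize-gpt4o-4k",  # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=4,   # train_batch_size
    per_device_eval_batch_size=2,    # eval_batch_size
    gradient_accumulation_steps=2,   # 4 per device * 8 GPUs * 2 = 64 total
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10,
    # Adam settings from the card (transformers defaults to AdamW).
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```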

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 44.3098       | 1.0   | 7    | 13.7204         |
| 25.7366       | 2.0   | 14   | 8.6916          |
| 19.0375       | 3.0   | 21   | 7.6308          |
| 18.2973       | 4.0   | 28   | 7.1198          |
| 14.8387       | 5.0   | 35   | 6.8470          |
| 12.5684       | 6.0   | 42   | 6.8495          |
| 9.6308        | 7.0   | 49   | 6.6799          |
| 5.7187        | 8.0   | 56   | 6.0818          |
| 4.8487        | 9.0   | 63   | 6.0318          |
| 4.4303        | 10.0  | 70   | 6.0322          |

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1