---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-chat-hf
tags:
  - trl
  - sft
  - generated_from_trainer
model-index:
  - name: llama-2-7b-chat-hf-gsm8k-bs32-sharegpt
    results: []
---

# llama-2-7b-chat-hf-gsm8k-bs32-sharegpt

This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
It achieves the following results on the evaluation set:

- Loss: 0.7306

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 5
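The settings above fit together arithmetically: the effective batch size is the per-device batch times the device count (4 × 8 = 32), and the linear scheduler ramps the learning rate up over the first 3% of steps, then decays it to zero. A minimal dependency-free sketch of that schedule (the total step count of ~1170 is an assumption inferred from the log below, where step 1100 lands at epoch 4.70; it is not stated in the card):

```python
def linear_warmup_lr(step, base_lr=0.0001, warmup_ratio=0.03, total_steps=1170):
    """Linear schedule with warmup, mirroring lr_scheduler_type: linear.

    total_steps is a hypothetical value inferred from the training log,
    not a number reported in this card.
    """
    warmup_steps = int(total_steps * warmup_ratio)  # ~35 steps at ratio 0.03
    if step < warmup_steps:
        # Linear ramp from 0 up to base_lr during warmup.
        return base_lr * step / max(1, warmup_steps)
    # Linear decay from base_lr down to 0 over the remaining steps.
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# Effective batch size: per-device train batch * number of devices.
total_train_batch_size = 4 * 8  # = 32, matching the reported value
```

This is the same shape of schedule that `transformers.get_linear_schedule_with_warmup` produces; the sketch only makes the arithmetic explicit.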

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8458        | 0.4274 | 100  | 0.8449          |
| 0.8417        | 0.8547 | 200  | 0.7988          |
| 0.8234        | 1.2821 | 300  | 0.7797          |
| 0.7774        | 1.7094 | 400  | 0.7670          |
| 0.72          | 2.1368 | 500  | 0.7575          |
| 0.7188        | 2.5641 | 600  | 0.7498          |
| 0.8102        | 2.9915 | 700  | 0.7438          |
| 0.7295        | 3.4188 | 800  | 0.7392          |
| 0.7591        | 3.8462 | 900  | 0.7351          |
| 0.7615        | 4.2735 | 1000 | 0.7323          |
| 0.7084        | 4.7009 | 1100 | 0.7306          |
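As a sanity check on the log above, the epoch/step ratio lets one back out an approximate dataset size (a rough estimate, not a figure stated in this card):

```python
# Step 1100 in the table corresponds to epoch 4.7009, so:
steps_per_epoch = round(1100 / 4.7009)      # ~234 optimizer steps per epoch
examples_per_epoch = steps_per_epoch * 32   # times the effective batch size of 32
# examples_per_epoch == 7488, consistent with GSM8K's ~7.5k-example train
# split, as the "gsm8k" in the model name suggests (an inference, not
# something the card confirms).
```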

### Framework versions

- PEFT 0.13.2
- Transformers 4.45.2
- PyTorch 2.4.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3