
# Llama-2-7b-chat-hf-finetune_90_10_EX

This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 1.3709
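Assuming the reported loss is the mean token-level cross-entropy (the usual convention for causal-LM fine-tuning; the card itself does not state this), it corresponds to a perplexity of roughly exp(1.3709):

```python
import math

eval_loss = 1.3709                 # validation loss reported above
perplexity = math.exp(eval_loss)   # ppl = exp(cross-entropy), if the loss is per-token
print(round(perplexity, 2))        # -> 3.94
```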

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0001
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: reduce_lr_on_plateau
- num_epochs: 50
- mixed_precision_training: Native AMP
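The `reduce_lr_on_plateau` scheduler listed above lowers the learning rate whenever the monitored metric stops improving. A minimal sketch of that behaviour, where the `factor` and `patience` values are illustrative defaults and not taken from this card:

```python
def reduce_on_plateau(initial_lr, val_losses, factor=0.1, patience=2):
    """Return the learning rate in effect after each epoch, mimicking a
    reduce-on-plateau policy: multiply the LR by `factor` once the
    validation loss has failed to improve for more than `patience` epochs."""
    lr, best, bad_epochs = initial_lr, float("inf"), 0
    history = []
    for loss in val_losses:
        if loss < best:            # improvement: reset the patience counter
            best, bad_epochs = loss, 0
        else:                      # no improvement this epoch
            bad_epochs += 1
            if bad_epochs > patience:
                lr *= factor       # cut the learning rate
                bad_epochs = 0
        history.append(lr)
    return history
```

For example, `reduce_on_plateau(1e-4, [1.0, 0.9, 1.1, 1.2, 1.3])` keeps the learning rate at 1e-4 until the third consecutive non-improving epoch, then drops it to 1e-5.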

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6119        | 1.0   | 99   | 0.9019          |
| 0.2875        | 2.0   | 198  | 0.9807          |
| 0.1031        | 3.0   | 297  | 1.0508          |
| 0.0889        | 4.0   | 396  | 1.0963          |
| 0.0561        | 5.0   | 495  | 1.1435          |
| 0.0535        | 6.0   | 594  | 1.1806          |
| 0.0526        | 7.0   | 693  | 1.2156          |
| 0.0654        | 8.0   | 792  | 1.2466          |
| 0.0550        | 9.0   | 891  | 1.2079          |
| 0.0681        | 10.0  | 990  | 1.2817          |
| 0.0735        | 11.0  | 1089 | 1.2821          |
| 0.0587        | 12.0  | 1188 | 1.2947          |
| 0.0467        | 13.0  | 1287 | 1.3097          |
| 0.0451        | 14.0  | 1386 | 1.3194          |
| 0.0438        | 15.0  | 1485 | 1.3266          |
| 0.0483        | 16.0  | 1584 | 1.3324          |
| 0.0408        | 17.0  | 1683 | 1.3386          |
| 0.0440        | 18.0  | 1782 | 1.3436          |
| 0.0450        | 19.0  | 1881 | 1.3477          |
| 0.0446        | 20.0  | 1980 | 1.3518          |
| 0.0463        | 21.0  | 2079 | 1.3555          |
| 0.0439        | 22.0  | 2178 | 1.3583          |
| 0.0458        | 23.0  | 2277 | 1.3601          |
| 0.0424        | 24.0  | 2376 | 1.3610          |
| 0.0442        | 25.0  | 2475 | 1.3620          |
| 0.0413        | 26.0  | 2574 | 1.3626          |
| 0.0443        | 27.0  | 2673 | 1.3634          |
| 0.0426        | 28.0  | 2772 | 1.3643          |
| 0.0432        | 29.0  | 2871 | 1.3652          |
| 0.0443        | 30.0  | 2970 | 1.3660          |
| 0.0444        | 31.0  | 3069 | 1.3668          |
| 0.0423        | 32.0  | 3168 | 1.3678          |
| 0.0419        | 33.0  | 3267 | 1.3685          |
| 0.0465        | 34.0  | 3366 | 1.3695          |
| 0.0443        | 35.0  | 3465 | 1.3695          |
| 0.0443        | 36.0  | 3564 | 1.3697          |
| 0.0442        | 37.0  | 3663 | 1.3698          |
| 0.0444        | 38.0  | 3762 | 1.3699          |
| 0.0446        | 39.0  | 3861 | 1.3700          |
| 0.0414        | 40.0  | 3960 | 1.3702          |
| 0.0429        | 41.0  | 4059 | 1.3702          |
| 0.0429        | 42.0  | 4158 | 1.3703          |
| 0.0436        | 43.0  | 4257 | 1.3704          |
| 0.0442        | 44.0  | 4356 | 1.3706          |
| 0.0444        | 45.0  | 4455 | 1.3707          |
| 0.0445        | 46.0  | 4554 | 1.3708          |
| 0.0414        | 47.0  | 4653 | 1.3708          |
| 0.0426        | 48.0  | 4752 | 1.3708          |
| 0.0432        | 49.0  | 4851 | 1.3707          |
| 0.0443        | 50.0  | 4950 | 1.3709          |

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.2
- PyTorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
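Since the framework versions above include PEFT, the published weights are an adapter rather than a full checkpoint, so the base model must be loaded first and the adapter applied on top. A hypothetical loading sketch (repo ids taken from this card; not an official snippet, and downloading the weights requires access to the gated Llama 2 base model):

```python
# Hypothetical usage sketch: load the base model, then attach this adapter.
BASE_MODEL = "meta-llama/Llama-2-7b-chat-hf"
ADAPTER = "CarlosPov/Llama-2-7b-chat-hf-finetune_90_10_EX"

def load_finetuned(base: str = BASE_MODEL, adapter: str = ADAPTER):
    # Lazy imports so the sketch can be read without the libraries installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base)
    model = PeftModel.from_pretrained(model, adapter)  # apply the PEFT adapter
    return tokenizer, model
```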

This model is published as the adapter CarlosPov/Llama-2-7b-chat-hf-finetune_90_10_EX in the model tree of meta-llama/Llama-2-7b-chat-hf.