lmind_nq_train6000_eval6489_v1_docidx_v3_3e-5_lora2

This model is a fine-tuned version of meta-llama/Llama-2-7b-hf on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 4.9207
  • Accuracy: 0.4310
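
The card does not say which library produced this checkpoint, but the model id's "lora2" suffix and the Llama-2-7b base suggest a PEFT LoRA adapter. Below is a minimal loading sketch under that assumption; the adapter repo id and the prompt format are placeholders, not details confirmed by this card.

```python
# Assumption: this repo hosts a PEFT/LoRA adapter for meta-llama/Llama-2-7b-hf;
# the card does not state the training library, so treat this as a sketch.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_ID = "meta-llama/Llama-2-7b-hf"
ADAPTER_ID = "<this-repo-id>"  # placeholder: substitute the actual adapter repo path

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base = AutoModelForCausalLM.from_pretrained(
    BASE_ID, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, ADAPTER_ID)  # attach the LoRA weights
model.eval()

# Illustrative prompt only; the card does not document the training format.
inputs = tokenizer("Question: who wrote Hamlet? Answer:", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```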

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training; a sketch of how they might map onto TrainingArguments follows the list:

  • learning_rate: 3e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 4
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 32
  • total_eval_batch_size: 8
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: constant
  • lr_scheduler_warmup_ratio: 0.05
  • num_epochs: 50.0
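
The training script itself is not included in the card. As a rough guide, the hyperparameters above might translate to transformers.TrainingArguments as sketched below, launched across 4 GPUs to match the distributed setup; the dataset, model, and LoRA wiring are omitted because they are not documented here.

```python
# Sketch only: mirrors the hyperparameters listed above; not the author's script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="lmind_nq_train6000_eval6489_v1_docidx_v3_3e-5_lora2",
    learning_rate=3e-5,
    per_device_train_batch_size=2,  # train_batch_size: 2
    per_device_eval_batch_size=2,   # eval_batch_size: 2 (x4 GPUs = 8 total)
    seed=42,
    gradient_accumulation_steps=4,  # 2 per device x 4 GPUs x 4 steps = 32 effective
    lr_scheduler_type="constant",   # a plain constant schedule applies no warmup,
    warmup_ratio=0.05,              # so this ratio is recorded but has no effect
    num_train_epochs=50.0,
    adam_beta1=0.9,                 # optimizer betas/epsilon as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```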

Training results

| Training Loss | Epoch | Step  | Accuracy | Validation Loss |
|:-------------:|:-----:|:-----:|:--------:|:---------------:|
| 1.4104        | 1.0   | 341   | 0.4537   | 3.3575          |
| 1.389         | 2.0   | 683   | 0.4544   | 3.4180          |
| 1.3414        | 3.0   | 1024  | 0.4548   | 3.5119          |
| 1.3002        | 4.0   | 1366  | 0.4554   | 3.5288          |
| 1.2574        | 5.0   | 1707  | 0.4539   | 3.6893          |
| 1.2258        | 6.0   | 2049  | 0.4562   | 3.7259          |
| 1.1844        | 7.0   | 2390  | 0.4559   | 3.7244          |
| 1.1363        | 8.0   | 2732  | 0.4544   | 3.8139          |
| 1.0903        | 9.0   | 3073  | 0.4524   | 3.9116          |
| 1.0538        | 10.0  | 3415  | 0.4516   | 3.9220          |
| 0.9971        | 11.0  | 3756  | 0.4514   | 3.9673          |
| 0.9699        | 12.0  | 4098  | 0.4508   | 4.0336          |
| 0.9235        | 13.0  | 4439  | 0.4493   | 4.0020          |
| 0.891         | 14.0  | 4781  | 0.4477   | 4.0716          |
| 0.845         | 15.0  | 5122  | 0.4477   | 4.0992          |
| 0.8009        | 16.0  | 5464  | 0.4464   | 4.0933          |
| 0.782         | 17.0  | 5805  | 0.4467   | 4.1283          |
| 0.7294        | 18.0  | 6147  | 0.4456   | 4.1643          |
| 0.6792        | 19.0  | 6488  | 0.4449   | 4.1859          |
| 0.6672        | 20.0  | 6830  | 0.4437   | 4.2010          |
| 0.6258        | 21.0  | 7171  | 0.4429   | 4.2300          |
| 0.599         | 22.0  | 7513  | 0.4419   | 4.2532          |
| 0.5625        | 23.0  | 7854  | 0.4430   | 4.2937          |
| 0.5267        | 24.0  | 8196  | 0.4415   | 4.2548          |
| 0.5004        | 25.0  | 8537  | 0.4404   | 4.3325          |
| 0.4681        | 26.0  | 8879  | 0.4396   | 4.3162          |
| 0.4453        | 27.0  | 9220  | 0.4388   | 4.3771          |
| 0.4161        | 28.0  | 9562  | 0.4386   | 4.4060          |
| 0.3994        | 29.0  | 9903  | 0.4377   | 4.4688          |
| 0.3695        | 30.0  | 10245 | 0.4377   | 4.4645          |
| 0.3505        | 31.0  | 10586 | 0.4378   | 4.4624          |
| 0.3342        | 32.0  | 10928 | 0.4365   | 4.4630          |
| 0.3075        | 33.0  | 11269 | 0.4342   | 4.5444          |
| 0.2949        | 34.0  | 11611 | 0.4344   | 4.5481          |
| 0.2705        | 35.0  | 11952 | 0.4357   | 4.5614          |
| 0.2554        | 36.0  | 12294 | 0.4339   | 4.5910          |
| 0.2428        | 37.0  | 12635 | 0.4332   | 4.6458          |
| 0.2277        | 38.0  | 12977 | 0.4327   | 4.6553          |
| 0.2172        | 39.0  | 13318 | 0.4328   | 4.7071          |
| 0.2016        | 40.0  | 13660 | 0.4331   | 4.7180          |
| 0.1965        | 41.0  | 14001 | 0.4323   | 4.7568          |
| 0.1851        | 42.0  | 14343 | 0.4321   | 4.7562          |
| 0.1739        | 43.0  | 14684 | 0.4317   | 4.7874          |
| 0.1719        | 44.0  | 15004 | 0.4323   | 4.8029          |
| 0.1626        | 45.0  | 15346 | 0.4318   | 4.7820          |
| 0.1535        | 46.0  | 15687 | 0.4315   | 4.8637          |
| 0.1524        | 47.0  | 16029 | 0.4315   | 4.8990          |
| 0.1419        | 48.0  | 16370 | 0.4309   | 4.8602          |
| 0.1405        | 49.0  | 16712 | 0.4301   | 4.8813          |
| 0.134         | 49.99 | 17050 | 0.4310   | 4.9207          |
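
Note the divergence across the run: training loss decreases monotonically from 1.4104 to 0.134, while validation loss climbs from its epoch-1 minimum of 3.3575 to 4.9207 and accuracy drifts from a peak of 0.4562 (epoch 6) down to 0.4310, a pattern consistent with overfitting over the 50 epochs.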

Framework versions

  • Transformers 4.34.0
  • Pytorch 2.1.0+cu121
  • Datasets 2.18.0
  • Tokenizers 0.14.1
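
When reproducing or extending this run, matching these pinned versions (in particular Transformers 4.34.0 with PyTorch 2.1.0+cu121) is the safest option, since Trainer defaults and scheduler behavior can change between releases.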