Built with Axolotl

7e10956b-c129-453b-be81-43b9e8a00193

This model is a fine-tuned version of oopsung/llama2-7b-koNqa-test-v1 on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.8757
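
Since this is a PEFT adapter (see Framework versions below), it can be loaded on top of the base model. A minimal sketch, assuming the repo ids listed on this card; the prompt and generation settings are illustrative only:

```python
# Minimal loading sketch; repo ids are taken from this card,
# the prompt and generation settings are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "oopsung/llama2-7b-koNqa-test-v1"
adapter_id = "lesso08/7e10956b-c129-453b-be81-43b9e8a00193"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```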

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.000208
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 8 (train_batch_size × gradient_accumulation_steps = 4 × 2)
  • optimizer: adamw_bnb_8bit (8-bit AdamW via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 50
  • training_steps: 500
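
For orientation, these settings map approximately onto transformers.TrainingArguments as sketched below; the run itself was launched through Axolotl, so treat this as an illustrative reconstruction rather than the original config (output_dir is assumed):

```python
# Approximate mapping of the listed hyperparameters onto
# transformers.TrainingArguments; the actual run used Axolotl,
# so this is illustrative, not the original config.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="outputs",           # assumed; not stated on the card
    learning_rate=0.000208,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 4 * 2 = 8
    optim="adamw_bnb_8bit",         # OptimizerNames.ADAMW_BNB
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=50,
    max_steps=500,
)
```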

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0001 | 1    | 2.8507          |
| 1.6873        | 0.0033 | 50   | 1.2731          |
| 1.545         | 0.0065 | 100  | 1.2757          |
| 1.3988        | 0.0098 | 150  | 1.1874          |
| 1.6102        | 0.0130 | 200  | 1.1087          |
| 1.3597        | 0.0163 | 250  | 1.0507          |
| 1.3285        | 0.0196 | 300  | 0.9809          |
| 1.0915        | 0.0228 | 350  | 0.9417          |
| 1.2449        | 0.0261 | 400  | 0.8955          |
| 1.0734        | 0.0293 | 450  | 0.8762          |
| 1.0317        | 0.0326 | 500  | 0.8757          |

Framework versions

  • PEFT 0.13.2
  • Transformers 4.46.0
  • Pytorch 2.5.0+cu124
  • Datasets 3.0.1
  • Tokenizers 0.20.1