gemma9_on_korean_summary_events

This model is a fine-tuned version of rtzr/ko-gemma-2-9b-it, trained as a PEFT adapter. The fine-tuning dataset is not documented in this card. It achieves the following results on the evaluation set:

  • Loss: 0.4183
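
Since the card does not include usage instructions, below is a minimal inference sketch. The repo ids are taken from this card; the dtype, device placement, chat-template prompt format, and generation settings are assumptions rather than documented choices.

```python
# Minimal inference sketch: load the base model, attach the PEFT adapter,
# and generate. Prompt format and generation settings are assumptions.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "rtzr/ko-gemma-2-9b-it"
adapter_id = "ghost613/gemma9_on_korean_summary_events"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

# Hypothetical prompt; the exact instruction format used in training is not
# documented. ("Please summarize the following text:" in Korean.)
messages = [{"role": "user", "content": "다음 글을 요약해 주세요:\n<본문>"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```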

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 5
  • total_train_batch_size: 10
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 50
  • training_steps: 400
  • mixed_precision_training: Native AMP
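
For reference, these values map onto a Hugging Face TrainingArguments configuration roughly as sketched below. This is a reconstruction from the list above, not the actual training script: the output directory and the fp16 flag (for "Native AMP") are assumptions, and the Trainer's stock optimizer is AdamW rather than plain Adam.

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; output_dir and fp16 are assumptions.
training_args = TrainingArguments(
    output_dir="gemma9_on_korean_summary_events",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=5,  # total train batch size: 2 * 5 = 10
    lr_scheduler_type="linear",
    warmup_steps=50,
    max_steps=400,
    fp16=True,  # mixed_precision_training: Native AMP
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```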

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|---------------|--------|------|-----------------|
| 1.5952        | 0.1316 | 20   | 1.0820          |
| 0.9103        | 0.2632 | 40   | 0.7513          |
| 0.7022        | 0.3947 | 60   | 0.5833          |
| 0.5149        | 0.5263 | 80   | 0.4630          |
| 0.4837        | 0.6579 | 100  | 0.4376          |
| 0.449         | 0.7895 | 120  | 0.4213          |
| 0.431         | 0.9211 | 140  | 0.4080          |
| 0.3811        | 1.0526 | 160  | 0.4000          |
| 0.3227        | 1.1842 | 180  | 0.3964          |
| 0.283         | 1.3158 | 200  | 0.3974          |
| 0.2984        | 1.4474 | 220  | 0.3993          |
| 0.3102        | 1.5789 | 240  | 0.3851          |
| 0.3045        | 1.7105 | 260  | 0.3847          |
| 0.3034        | 1.8421 | 280  | 0.3851          |
| 0.2779        | 1.9737 | 300  | 0.3793          |
| 0.2191        | 2.1053 | 320  | 0.3991          |
| 0.1971        | 2.2368 | 340  | 0.4157          |
| 0.1908        | 2.3684 | 360  | 0.4209          |
| 0.1766        | 2.5    | 380  | 0.4190          |
| 0.1749        | 2.6316 | 400  | 0.4183          |

Framework versions

  • PEFT 0.12.0
  • Transformers 4.43.4
  • PyTorch 2.3.1+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1

Model tree for ghost613/gemma9_on_korean_summary_events

  • Base model: google/gemma-2-9b → rtzr/ko-gemma-2-9b-it → this adapter