---
license: mit
base_model: microsoft/deberta-v3-large
tags:
  - generated_from_trainer
model-index:
  - name: deberta-v3-large-kaggle-mlm
    results: []
---

# deberta-v3-large-kaggle-mlm

This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 1.3182
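
As a rough point of reference, a masked-token cross-entropy loss of 1.3182 corresponds to a perplexity of about exp(1.3182) ≈ 3.74 on the masked positions. The sketch below shows how the checkpoint could be used for masked-token prediction; the repository id `habedi/deberta-v3-large-kaggle-mlm` is inferred from this page and may need adjusting.

```python
# Minimal fill-mask sketch; the repo id below is an assumption.
from transformers import pipeline

fill = pipeline("fill-mask", model="habedi/deberta-v3-large-kaggle-mlm")

# DeBERTa-v3 tokenizers use "[MASK]" as the mask token.
for pred in fill("The capital of France is [MASK]."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```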

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
- mixed_precision_training: Native AMP
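
The card does not include the training script, so the following is only a minimal sketch of an equivalent `Trainer` setup under the hyperparameters above. The output directory, the toy dataset, and the 15% masking probability are assumptions (the masking rate is simply the `DataCollatorForLanguageModeling` default); the actual corpus is undocumented.

```python
# Sketch of an equivalent Trainer configuration; not the author's script.
from datasets import Dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-large")
model = AutoModelForMaskedLM.from_pretrained("microsoft/deberta-v3-large")

# Placeholder data: the real training corpus is not documented on the card.
toy = Dataset.from_dict({"text": ["DeBERTa improves BERT with disentangled attention."]})
tokenized = toy.map(
    lambda batch: tokenizer(batch["text"], truncation=True),
    batched=True,
    remove_columns=["text"],
)

# Hyperparameters taken from the list above; Adam betas/epsilon and the
# linear scheduler match the Trainer defaults.
args = TrainingArguments(
    output_dir="deberta-v3-large-kaggle-mlm",  # hypothetical path
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=25,
    fp16=True,                     # "Native AMP" mixed-precision training
    evaluation_strategy="epoch",   # the results table reports one eval per epoch
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,  # replace with the actual tokenized corpus
    eval_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(
        tokenizer=tokenizer, mlm_probability=0.15  # default rate; assumed
    ),
)
trainer.train()
```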

### Training results

| Training Loss | Epoch | Step   | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 3.1114        | 1.0   | 6848   | 2.6616          |
| 2.2122        | 2.0   | 13696  | 1.9734          |
| 2.0848        | 3.0   | 20544  | 1.9930          |
| 1.8056        | 4.0   | 27392  | 1.7167          |
| 1.7003        | 5.0   | 34240  | 1.8419          |
| 1.6414        | 6.0   | 41088  | 1.5828          |
| 1.583         | 7.0   | 47936  | 1.5298          |
| 1.5245        | 8.0   | 54784  | 1.4964          |
| 1.491         | 9.0   | 61632  | 1.4671          |
| 1.4662        | 10.0  | 68480  | 1.4805          |
| 1.426         | 11.0  | 75328  | 1.4506          |
| 1.3924        | 12.0  | 82176  | 1.4272          |
| 1.3797        | 13.0  | 89024  | 1.4092          |
| 1.3713        | 14.0  | 95872  | 1.3947          |
| 1.3444        | 15.0  | 102720 | 1.3765          |
| 1.3414        | 16.0  | 109568 | 1.3636          |
| 1.3256        | 17.0  | 116416 | 1.3700          |
| 1.3084        | 18.0  | 123264 | 1.3607          |
| 1.2925        | 19.0  | 130112 | 1.3428          |
| 1.2615        | 20.0  | 136960 | 1.3483          |
| 1.2733        | 21.0  | 143808 | 1.3440          |
| 1.2809        | 22.0  | 150656 | 1.3314          |
| 1.2576        | 23.0  | 157504 | 1.3388          |
| 1.2606        | 24.0  | 164352 | 1.3126          |
| 1.2608        | 25.0  | 171200 | 1.3211          |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
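
To check that a local environment matches these pins, a quick version printout (assuming the packages are installed) looks like:

```python
# Print installed versions to compare against the card's pins.
import datasets
import tokenizers
import torch
import transformers

print("Transformers:", transformers.__version__)  # card: 4.41.2
print("PyTorch:    ", torch.__version__)          # card: 2.3.1+cu121
print("Datasets:   ", datasets.__version__)       # card: 2.19.2
print("Tokenizers: ", tokenizers.__version__)     # card: 0.19.1
```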