distilbert-finetuned-lr1e-07-epochs15

This model is a fine-tuned version of distilbert-base-cased-distilled-squad on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 5.7865
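
As a hedged sketch, the checkpoint can be loaded for extractive question answering with the transformers pipeline. The model id below is an assumption taken from this card's title; replace it with the actual Hub repository path:

```python
from transformers import pipeline

# Hypothetical repo id inferred from the model name on this card;
# substitute the real "<user>/<repo>" path on the Hugging Face Hub.
qa = pipeline("question-answering", model="distilbert-finetuned-lr1e-07-epochs15")

result = qa(
    question="What base model was fine-tuned?",
    context="This model is a fine-tuned version of distilbert-base-cased-distilled-squad.",
)
print(result["answer"], round(result["score"], 4))
```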

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the equivalent TrainingArguments follows the list):

  • learning_rate: 1e-07
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 15
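
As referenced above, a minimal sketch of how these settings map onto transformers TrainingArguments; the output directory and evaluation strategy are assumptions, not taken from the original run:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="distilbert-finetuned-lr1e-07-epochs15",  # assumed output path
    learning_rate=1e-07,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=15,
    lr_scheduler_type="linear",
    adam_beta1=0.9,      # Adam betas and epsilon match the values above,
    adam_beta2=0.999,    # which are also the Trainer defaults
    adam_epsilon=1e-08,
    evaluation_strategy="epoch",  # assumption: consistent with the per-epoch results below
)
```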

Training results

Training Loss | Epoch | Step | Validation Loss
------------- | ----- | ---- | ---------------
No log        | 1.0   | 10   | 6.3794
No log        | 2.0   | 20   | 6.2819
No log        | 3.0   | 30   | 6.1965
No log        | 4.0   | 40   | 6.1216
No log        | 5.0   | 50   | 6.0546
No log        | 6.0   | 60   | 6.0016
No log        | 7.0   | 70   | 5.9531
No log        | 8.0   | 80   | 5.9108
No log        | 9.0   | 90   | 5.8776
No log        | 10.0  | 100  | 5.8491
No log        | 11.0  | 110  | 5.8264
No log        | 12.0  | 120  | 5.8084
No log        | 13.0  | 130  | 5.7962
No log        | 14.0  | 140  | 5.7891
No log        | 15.0  | 150  | 5.7865

"No log" in the Training Loss column likely means the training loss was never reported: the run lasted only 150 optimizer steps, below the Trainer's default logging interval of 500 steps.

Framework versions

  • Transformers 4.28.1
  • Pytorch 2.0.0+cu118
  • Datasets 2.12.0
  • Tokenizers 0.13.3
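
To reproduce this environment, the versions above can be pinned, e.g. in a requirements.txt. This is a sketch; the +cu118 PyTorch wheel assumes installation from the PyTorch CUDA 11.8 index:

```
# Pinned to the versions listed in this card.
transformers==4.28.1
torch==2.0.0+cu118   # requires the PyTorch cu118 wheel index
datasets==2.12.0
tokenizers==0.13.3
```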