---
license: mit
base_model: roberta-large
tags:
  - generated_from_trainer
metrics:
  - accuracy
model-index:
  - name: roberta-base_brkfst_trainer
    results: []
---

roberta-base_brkfst_trainer

This model is a fine-tuned version of roberta-large on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0782
  • Accuracy: 0.98
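
Since the card otherwise lacks usage details, here is a minimal inference sketch, assuming this checkpoint is a sequence-classification fine-tune (the card reports accuracy as its metric). The Hub id and example text are assumptions for illustration; substitute the actual checkpoint path.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Hypothetical Hub id; replace with the actual checkpoint path for this model.
model_id = "JBhug/roberta-base_brkfst_trainer"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

inputs = tokenizer("Example input sentence.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # index of the predicted class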

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training; a hedged configuration sketch follows the list:

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 27
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 15
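
A minimal sketch of the corresponding Trainer configuration, assuming the standard TrainingArguments API. The output directory is hypothetical and the dataset/Trainer wiring is omitted; only the values listed above come from this card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-base_brkfst_trainer",  # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=27,
    num_train_epochs=15,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```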

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6559        | 0.71  | 10   | 0.6907          | 0.8      |
| 0.5465        | 1.43  | 20   | 0.3822          | 0.84     |
| 0.4035        | 2.14  | 30   | 0.3178          | 0.93     |
| 0.3651        | 2.86  | 40   | 0.1117          | 0.94     |
| 0.1986        | 3.57  | 50   | 0.1832          | 0.95     |
| 0.2985        | 4.29  | 60   | 0.1133          | 0.96     |
| 0.141         | 5.0   | 70   | 0.1594          | 0.97     |
| 0.1334        | 5.71  | 80   | 0.2771          | 0.96     |
| 0.1874        | 6.43  | 90   | 0.0757          | 0.95     |
| 0.0594        | 7.14  | 100  | 0.2082          | 0.95     |
| 0.2883        | 7.86  | 110  | 0.2366          | 0.96     |
| 0.0459        | 8.57  | 120  | 0.1599          | 0.96     |
| 0.0733        | 9.29  | 130  | 0.2568          | 0.96     |
| 0.0483        | 10.0  | 140  | 0.3639          | 0.94     |
| 0.0487        | 10.71 | 150  | 0.2121          | 0.97     |
| 0.0339        | 11.43 | 160  | 0.1368          | 0.98     |
| 0.0463        | 12.14 | 170  | 0.1465          | 0.98     |
| 0.0025        | 12.86 | 180  | 0.2487          | 0.96     |
| 0.0009        | 13.57 | 190  | 0.0968          | 0.98     |
| 0.0003        | 14.29 | 200  | 0.0710          | 0.98     |
| 0.0003        | 15.0  | 210  | 0.0782          | 0.98     |
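
The accuracy column is the standard classification accuracy reported at each evaluation step. A typical compute_metrics hook that produces it looks like the sketch below; the use of the evaluate library here is an assumption, not a detail recorded in this card.

```python
import numpy as np
import evaluate

# Assumed metric hook: compares argmax predictions against reference labels.
accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```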

Framework versions

  • Transformers 4.39.1
  • Pytorch 2.2.1+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.2