STS-Lora-Fine-Tuning-Capstone-roberta-base-filtered-137-with-higher-r-mid

This model is a LoRA (PEFT) fine-tuned version of FacebookAI/roberta-base; the training dataset is not specified in this card. It achieves the following results on the evaluation set:

  • Loss: 0.7478
  • Accuracy: 0.6891

Model description

More information needed

Intended uses & limitations

More information needed
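
Since this is a PEFT/LoRA adapter (see the framework versions below), it has to be loaded on top of the base model. The following is a minimal loading sketch, not a documented usage recipe: the sequence-classification head, the number of labels, and the sentence-pair input format are all assumptions, since the card only reports an accuracy metric and does not describe the task setup.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "FacebookAI/roberta-base"
adapter_id = "rajevan123/STS-Lora-Fine-Tuning-Capstone-roberta-base-filtered-137-with-higher-r-mid"

NUM_CLASSES = 5  # assumption: the label count is not stated in the card

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=NUM_CLASSES
)
# Attach the LoRA adapter weights to the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

# Example: score a sentence pair (the pair-input format is an assumption).
inputs = tokenizer(
    "A man is playing a guitar.",
    "Someone plays an instrument.",
    return_tensors="pt",
)
logits = model(**inputs).logits
print(logits.argmax(dim=-1))
```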

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 3e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30
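
Expressed as a Hugging Face `TrainingArguments` sketch, the listed settings correspond to the configuration below. Only these arguments are grounded in the card; the output directory is a placeholder, and the dataset, LoRA config, and model head are not documented here.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="sts-lora-roberta-base",  # placeholder path
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    # Adam betas/epsilon as reported (these match the transformers defaults):
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```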

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 113  | 1.0392          | 0.4551   |
| No log        | 2.0   | 226  | 1.0228          | 0.4944   |
| No log        | 3.0   | 339  | 0.9681          | 0.5225   |
| No log        | 4.0   | 452  | 0.8498          | 0.6404   |
| 0.8868        | 5.0   | 565  | 0.7953          | 0.6704   |
| 0.8868        | 6.0   | 678  | 0.7626          | 0.6723   |
| 0.8868        | 7.0   | 791  | 0.7913          | 0.6704   |
| 0.8868        | 8.0   | 904  | 0.7607          | 0.6723   |
| 0.6501        | 9.0   | 1017 | 0.7982          | 0.6873   |
| 0.6501        | 10.0  | 1130 | 0.7419          | 0.6685   |
| 0.6501        | 11.0  | 1243 | 0.7451          | 0.6873   |
| 0.6501        | 12.0  | 1356 | 0.7471          | 0.6760   |
| 0.6501        | 13.0  | 1469 | 0.7549          | 0.6816   |
| 0.5977        | 14.0  | 1582 | 0.7364          | 0.6835   |
| 0.5977        | 15.0  | 1695 | 0.7431          | 0.6760   |
| 0.5977        | 16.0  | 1808 | 0.7545          | 0.6742   |
| 0.5977        | 17.0  | 1921 | 0.7556          | 0.6873   |
| 0.5673        | 18.0  | 2034 | 0.7427          | 0.6873   |
| 0.5673        | 19.0  | 2147 | 0.7442          | 0.6873   |
| 0.5673        | 20.0  | 2260 | 0.7600          | 0.6798   |
| 0.5673        | 21.0  | 2373 | 0.7381          | 0.6854   |
| 0.5673        | 22.0  | 2486 | 0.7480          | 0.6873   |
| 0.5561        | 23.0  | 2599 | 0.7489          | 0.6854   |
| 0.5561        | 24.0  | 2712 | 0.7481          | 0.6873   |
| 0.5561        | 25.0  | 2825 | 0.7470          | 0.6873   |
| 0.5561        | 26.0  | 2938 | 0.7530          | 0.6891   |
| 0.5381        | 27.0  | 3051 | 0.7455          | 0.6854   |
| 0.5381        | 28.0  | 3164 | 0.7478          | 0.6891   |
| 0.5381        | 29.0  | 3277 | 0.7483          | 0.6891   |
| 0.5381        | 30.0  | 3390 | 0.7478          | 0.6891   |

Framework versions

  • PEFT 0.10.0
  • Transformers 4.38.2
  • Pytorch 2.2.1+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.2