---
license: apache-2.0
base_model: distilbert/distilbert-base-cased
tags:
  - generated_from_trainer
metrics:
  - precision
  - recall
  - f1
  - accuracy
model-index:
  - name: distilbert-cased-lft
    results: []
---

# distilbert-cased-lft

This model is a fine-tuned version of [distilbert/distilbert-base-cased](https://huggingface.co/distilbert/distilbert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.1017
- Precision: 0.8722
- Recall: 0.8905
- F1: 0.8813
- Accuracy: 0.9764
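
The card does not state the downstream task, but per-span precision/recall/F1 reported alongside token-level accuracy is the usual signature of token classification (e.g. NER). A minimal inference sketch under that assumption:

```python
# Minimal inference sketch, ASSUMING this checkpoint is a token-classification
# (e.g. NER) fine-tune; the card does not state the task explicitly.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="praysimanjuntak/distilbert-cased-lft",
    aggregation_strategy="simple",  # merge word-piece predictions into word-level spans
)

for entity in ner("DistilBERT was released by Hugging Face in New York."):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```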

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
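
For reference, these settings map onto a `TrainingArguments` configuration roughly as sketched below; the output directory and the 100-step evaluation interval (inferred from the results table) are assumptions, and the Adam betas/epsilon are simply the library defaults:

```python
# Hedged reconstruction of the hyperparameters listed above; output_dir and
# the eval cadence are assumptions, not taken from the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-cased-lft",  # assumed; the card gives no path
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    eval_strategy="steps",  # evaluate periodically during training
    eval_steps=100,         # matches the 100-step cadence in the results table
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the optimizer defaults.
)
```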

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 0.4065 | 100  | 0.1106          | 0.8374    | 0.7969 | 0.8167 | 0.9647   |
| No log        | 0.8130 | 200  | 0.0926          | 0.8242    | 0.8474 | 0.8356 | 0.9690   |
| No log        | 1.2195 | 300  | 0.0898          | 0.8325    | 0.8671 | 0.8494 | 0.9704   |
| No log        | 1.6260 | 400  | 0.0873          | 0.8591    | 0.8614 | 0.8602 | 0.9729   |
| 0.0943        | 2.0325 | 500  | 0.0829          | 0.8563    | 0.8765 | 0.8662 | 0.9740   |
| 0.0943        | 2.4390 | 600  | 0.0864          | 0.8656    | 0.8747 | 0.8701 | 0.9746   |
| 0.0943        | 2.8455 | 700  | 0.0842          | 0.8652    | 0.8761 | 0.8706 | 0.9746   |
| 0.0943        | 3.2520 | 800  | 0.0875          | 0.8627    | 0.8823 | 0.8724 | 0.9746   |
| 0.0943        | 3.6585 | 900  | 0.0887          | 0.8564    | 0.8829 | 0.8694 | 0.9744   |
| 0.0444        | 4.0650 | 1000 | 0.0875          | 0.8801    | 0.8797 | 0.8799 | 0.9763   |
| 0.0444        | 4.4715 | 1100 | 0.0944          | 0.8516    | 0.8901 | 0.8704 | 0.9746   |
| 0.0444        | 4.8780 | 1200 | 0.0906          | 0.8607    | 0.8891 | 0.8746 | 0.9752   |
| 0.0444        | 5.2846 | 1300 | 0.0934          | 0.8706    | 0.8896 | 0.8800 | 0.9765   |
| 0.0444        | 5.6911 | 1400 | 0.0914          | 0.8784    | 0.8862 | 0.8823 | 0.9765   |
| 0.0248        | 6.0976 | 1500 | 0.0918          | 0.8796    | 0.8896 | 0.8846 | 0.9772   |
| 0.0248        | 6.5041 | 1600 | 0.0960          | 0.8711    | 0.8916 | 0.8812 | 0.9765   |
| 0.0248        | 6.9106 | 1700 | 0.0970          | 0.8678    | 0.8876 | 0.8776 | 0.9763   |
| 0.0248        | 7.3171 | 1800 | 0.1008          | 0.8690    | 0.8887 | 0.8787 | 0.9759   |
| 0.0248        | 7.7236 | 1900 | 0.1012          | 0.8650    | 0.8926 | 0.8786 | 0.9759   |
| 0.0153        | 8.1301 | 2000 | 0.1002          | 0.8715    | 0.8921 | 0.8817 | 0.9762   |
| 0.0153        | 8.5366 | 2100 | 0.1003          | 0.8749    | 0.8889 | 0.8818 | 0.9763   |
| 0.0153        | 8.9431 | 2200 | 0.1015          | 0.8680    | 0.8917 | 0.8797 | 0.9760   |
| 0.0153        | 9.3496 | 2300 | 0.1015          | 0.8716    | 0.8882 | 0.8798 | 0.9764   |
| 0.0153        | 9.7561 | 2400 | 0.1017          | 0.8722    | 0.8905 | 0.8813 | 0.9764   |

### Framework versions

- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1