vit-msn-small-beta-fia-equally-enhanced_test_1

This model is a fine-tuned version of facebook/vit-msn-small on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6061
  • Accuracy: 0.8732
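
The checkpoint can be loaded like any other image-classification model on the Hub. Below is a minimal inference sketch, assuming the repository id Melo1512/vit-msn-small-beta-fia-equally-enhanced_test_1 and an arbitrary local image; the label names come from whatever classes were used during fine-tuning, which this card does not document.

```python
# Minimal inference sketch (illustrative; image path and labels are placeholders).
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "Melo1512/vit-msn-small-beta-fia-equally-enhanced_test_1"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("example.jpg").convert("RGB")  # placeholder image path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(-1).item()
print(model.config.id2label[predicted_id])
```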

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
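
The specific dataset is not documented; the card only states that the data was loaded with the 🤗 Datasets "imagefolder" builder. A minimal sketch of how such a dataset is typically prepared is shown below; the directory path is a placeholder, and the builder infers labels from class-named subfolders.

```python
# Illustrative only: the actual data directory and splits for this model are not documented.
from datasets import load_dataset

dataset = load_dataset("imagefolder", data_dir="path/to/images")  # expects one subfolder per class
print(dataset)
```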

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 256
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.2
  • num_epochs: 100
  • label_smoothing_factor: 0.1
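
The sketch below maps these hyperparameters onto transformers.TrainingArguments. It is not the exact training script: the output directory name is an assumption, and the model, datasets, and metric function would still need to be supplied to a Trainer.

```python
# Sketch only: reproduces the listed hyperparameters; everything else is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-msn-small-beta-fia-equally-enhanced_test_1",  # assumed name
    learning_rate=1e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    gradient_accumulation_steps=4,   # effective train batch size: 64 * 4 = 256
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.2,
    num_train_epochs=100,
    label_smoothing_factor=0.1,
)
# training_args would then be passed to transformers.Trainer together with the
# fine-tuned facebook/vit-msn-small model, the train/eval datasets, and a metric function.
```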

Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log        | 0.5714  | 1    | 1.5578          | 0.0704   |
| No log        | 1.7143  | 3    | 1.4950          | 0.0634   |
| No log        | 2.8571  | 5    | 1.3574          | 0.0634   |
| No log        | 4.0     | 7    | 1.1698          | 0.1268   |
| No log        | 4.5714  | 8    | 1.0682          | 0.3169   |
| 1.5036        | 5.7143  | 10   | 0.8754          | 0.7958   |
| 1.5036        | 6.8571  | 12   | 0.7359          | 0.8239   |
| 1.5036        | 8.0     | 14   | 0.6782          | 0.8169   |
| 1.5036        | 8.5714  | 15   | 0.6718          | 0.8169   |
| 1.5036        | 9.7143  | 17   | 0.6821          | 0.8099   |
| 1.5036        | 10.8571 | 19   | 0.7157          | 0.8028   |
| 0.7486        | 12.0    | 21   | 0.7173          | 0.8099   |
| 0.7486        | 12.5714 | 22   | 0.6967          | 0.8169   |
| 0.7486        | 13.7143 | 24   | 0.6847          | 0.8169   |
| 0.7486        | 14.8571 | 26   | 0.6827          | 0.8239   |
| 0.7486        | 16.0    | 28   | 0.6959          | 0.8380   |
| 0.7486        | 16.5714 | 29   | 0.6826          | 0.8521   |
| 0.6547        | 17.7143 | 31   | 0.6360          | 0.8310   |
| 0.6547        | 18.8571 | 33   | 0.6257          | 0.8521   |
| 0.6547        | 20.0    | 35   | 0.6594          | 0.8732   |
| 0.6547        | 20.5714 | 36   | 0.6784          | 0.8380   |
| 0.6547        | 21.7143 | 38   | 0.6578          | 0.8521   |
| 0.5817        | 22.8571 | 40   | 0.6146          | 0.8592   |
| 0.5817        | 24.0    | 42   | 0.6212          | 0.8732   |
| 0.5817        | 24.5714 | 43   | 0.6395          | 0.8732   |
| 0.5817        | 25.7143 | 45   | 0.6452          | 0.8732   |
| 0.5817        | 26.8571 | 47   | 0.6317          | 0.8803   |
| 0.5817        | 28.0    | 49   | 0.6332          | 0.8803   |
| 0.5632        | 28.5714 | 50   | 0.6418          | 0.8732   |
| 0.5632        | 29.7143 | 52   | 0.6383          | 0.8803   |
| 0.5632        | 30.8571 | 54   | 0.6367          | 0.8592   |
| 0.5632        | 32.0    | 56   | 0.6253          | 0.8732   |
| 0.5632        | 32.5714 | 57   | 0.6268          | 0.8592   |
| 0.5632        | 33.7143 | 59   | 0.6234          | 0.8662   |
| 0.5328        | 34.8571 | 61   | 0.6368          | 0.8521   |
| 0.5328        | 36.0    | 63   | 0.6251          | 0.8592   |
| 0.5328        | 36.5714 | 64   | 0.6184          | 0.8732   |
| 0.5328        | 37.7143 | 66   | 0.6067          | 0.8732   |
| 0.5328        | 38.8571 | 68   | 0.6182          | 0.8662   |
| 0.5272        | 40.0    | 70   | 0.6398          | 0.8451   |
| 0.5272        | 40.5714 | 71   | 0.6440          | 0.8310   |
| 0.5272        | 41.7143 | 73   | 0.6318          | 0.8451   |
| 0.5272        | 42.8571 | 75   | 0.6111          | 0.8732   |
| 0.5272        | 44.0    | 77   | 0.6061          | 0.8732   |
| 0.5272        | 44.5714 | 78   | 0.6116          | 0.8732   |
| 0.5255        | 45.7143 | 80   | 0.6320          | 0.8451   |
| 0.5255        | 46.8571 | 82   | 0.6394          | 0.8310   |
| 0.5255        | 48.0    | 84   | 0.6379          | 0.8310   |
| 0.5255        | 48.5714 | 85   | 0.6363          | 0.8310   |
| 0.5255        | 49.7143 | 87   | 0.6282          | 0.8521   |
| 0.5255        | 50.8571 | 89   | 0.6214          | 0.8592   |
| 0.52          | 52.0    | 91   | 0.6195          | 0.8592   |
| 0.52          | 52.5714 | 92   | 0.6170          | 0.8662   |
| 0.52          | 53.7143 | 94   | 0.6169          | 0.8592   |
| 0.52          | 54.8571 | 96   | 0.6174          | 0.8592   |
| 0.52          | 56.0    | 98   | 0.6187          | 0.8592   |
| 0.52          | 56.5714 | 99   | 0.6193          | 0.8592   |
| 0.504         | 57.1429 | 100  | 0.6194          | 0.8592   |

Framework versions

  • Transformers 4.44.2
  • PyTorch 2.4.1+cu121
  • Datasets 3.2.0
  • Tokenizers 0.19.1