
swin-tiny-patch4-window7-224-finetuned-200k

This model is a fine-tuned version of microsoft/swin-tiny-patch4-window7-224 on an image classification dataset loaded with the 🤗 Datasets imagefolder loader. It achieves the following results on the evaluation set:

  • Loss: 0.4347
  • Accuracy: 0.7961
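
For reference, below is a minimal inference sketch using the 🤗 Transformers auto classes. The repository id is a placeholder (the namespace is not given in this card), and the label names depend on the dataset the model was fine-tuned on.

```python
# Minimal inference sketch; replace the placeholder repo id with the actual one.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "your-namespace/swin-tiny-patch4-window7-224-finetuned-200k"  # placeholder
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)

image = Image.open("example.jpg").convert("RGB")  # any input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```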

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an approximate TrainingArguments sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 128
  • eval_batch_size: 128
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 512
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 30
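
As a rough guide, the hyperparameters above correspond to TrainingArguments along the following lines. This is a hedged reconstruction, not the original training script; the output directory and the evaluation/save strategy are assumptions (the results table reports metrics once per epoch).

```python
# Approximate TrainingArguments matching the hyperparameters listed above.
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the library defaults, so they are not set explicitly.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="swin-tiny-patch4-window7-224-finetuned-200k",  # assumed output path
    learning_rate=5e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    gradient_accumulation_steps=4,   # effective train batch size: 128 * 4 = 512
    num_train_epochs=30,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    evaluation_strategy="epoch",     # assumed; metrics are reported per epoch
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
)
```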

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.634         | 0.99  | 36   | 0.6243          | 0.6262   |
| 0.5551        | 1.99  | 72   | 0.5186          | 0.7250   |
| 0.5183        | 2.98  | 108  | 0.4826          | 0.7673   |
| 0.4854        | 4.0   | 145  | 0.5640          | 0.7261   |
| 0.4645        | 4.99  | 181  | 0.4598          | 0.7817   |
| 0.4655        | 5.99  | 217  | 0.4787          | 0.7786   |
| 0.4582        | 6.98  | 253  | 0.4483          | 0.7899   |
| 0.4415        | 8.0   | 290  | 0.4709          | 0.7765   |
| 0.4546        | 8.99  | 326  | 0.4717          | 0.7817   |
| 0.4566        | 9.99  | 362  | 0.4538          | 0.7951   |
| 0.4675        | 10.98 | 398  | 0.4491          | 0.7817   |
| 0.4449        | 12.0  | 435  | 0.4992          | 0.7652   |
| 0.4349        | 12.99 | 471  | 0.4627          | 0.7817   |
| 0.4253        | 13.99 | 507  | 0.4492          | 0.7858   |
| 0.4278        | 14.98 | 543  | 0.4442          | 0.7951   |
| 0.4567        | 16.0  | 580  | 0.4362          | 0.7899   |
| 0.4205        | 16.99 | 616  | 0.4550          | 0.7889   |
| 0.4233        | 17.99 | 652  | 0.4336          | 0.7909   |
| 0.4014        | 18.98 | 688  | 0.4565          | 0.7889   |
| 0.4176        | 20.0  | 725  | 0.4323          | 0.7940   |
| 0.411         | 20.99 | 761  | 0.4348          | 0.7951   |
| 0.4128        | 21.99 | 797  | 0.4378          | 0.7971   |
| 0.4045        | 22.98 | 833  | 0.4317          | 0.7951   |
| 0.4001        | 24.0  | 870  | 0.4452          | 0.7868   |
| 0.4061        | 24.99 | 906  | 0.4286          | 0.7920   |
| 0.4033        | 25.99 | 942  | 0.4306          | 0.7951   |
| 0.3953        | 26.98 | 978  | 0.4320          | 0.7920   |
| 0.3924        | 28.0  | 1015 | 0.4338          | 0.7940   |
| 0.4056        | 28.99 | 1051 | 0.4329          | 0.7930   |
| 0.4032        | 29.79 | 1080 | 0.4347          | 0.7961   |
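
The per-epoch accuracy values above are the kind of metric typically produced by a Trainer compute_metrics callback. A hedged sketch follows; the evaluate-based setup is an assumption, not taken from the original training script.

```python
# Typical accuracy computation for the Trainer's compute_metrics hook
# (an assumed setup; the original script's metric code is not shown in this card).
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```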

Framework versions

  • Transformers 4.33.3
  • Pytorch 2.0.1+cu117
  • Datasets 2.14.5
  • Tokenizers 0.13.3