
swinv2-tiny-patch4-window8-256-dmae-humeda-DA-V2-b

This model is a fine-tuned version of microsoft/swinv2-tiny-patch4-window8-256 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.2943
  • Accuracy: 0.7115
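If the checkpoint is published on the Hugging Face Hub under this name, it can be loaded for image classification with the `transformers` pipeline API. This is a minimal sketch; the repository id is taken from the model name above, and the image path is a placeholder:

```python
from transformers import pipeline

# Assumed Hub repository id for this checkpoint
model_id = "Augusto777/swinv2-tiny-patch4-window8-256-dmae-humeda-DA-V2-b"

# Build an image-classification pipeline; this downloads the weights
# and the SwinV2 image processor on first use
classifier = pipeline("image-classification", model=model_id)

# Classify a local image ("example.png" is a placeholder path)
predictions = classifier("example.png")
for p in predictions:
    print(f"{p['label']}: {p['score']:.4f}")
```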

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5.5e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 64
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 42
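The derived values in the list follow from the others: the total train batch size is train_batch_size × gradient_accumulation_steps, and the warmup ratio is applied to the total number of optimizer steps (378, the final step reported in the training results). A small sketch of the arithmetic, assuming warmup steps are rounded up to a whole number as in `TrainingArguments.get_warmup_steps`:

```python
import math

# Values from the hyperparameter list above
train_batch_size = 16
gradient_accumulation_steps = 4
warmup_ratio = 0.1
total_optimizer_steps = 378  # final step in the training results

# Effective (total) train batch size per optimizer step
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 64

# Linear warmup length (assumption: ceil, matching Transformers' behavior)
warmup_steps = math.ceil(warmup_ratio * total_optimizer_steps)
print(warmup_steps)  # 38
```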

Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log        | 0.9474  | 9    | 1.5677          | 0.25     |
| 1.6369        | 2.0     | 19   | 1.5069          | 0.2885   |
| 1.5075        | 2.9474  | 28   | 1.3568          | 0.4615   |
| 1.2712        | 4.0     | 38   | 1.1267          | 0.5385   |
| 1.2712        | 4.9474  | 47   | 1.0141          | 0.6538   |
| 0.9909        | 6.0     | 57   | 1.0726          | 0.5962   |
| 0.8736        | 6.9474  | 66   | 0.9800          | 0.7115   |
| 0.7965        | 8.0     | 76   | 0.8837          | 0.6731   |
| 0.7513        | 8.9474  | 85   | 0.9431          | 0.6538   |
| 0.7513        | 10.0    | 95   | 0.9317          | 0.6923   |
| 0.6997        | 10.9474 | 104  | 0.9871          | 0.6923   |
| 0.5749        | 12.0    | 114  | 0.9986          | 0.6923   |
| 0.5366        | 12.9474 | 123  | 1.1291          | 0.6346   |
| 0.5108        | 14.0    | 133  | 0.9154          | 0.7115   |
| 0.5108        | 14.9474 | 142  | 0.9546          | 0.6731   |
| 0.5113        | 16.0    | 152  | 1.2550          | 0.6346   |
| 0.4594        | 16.9474 | 161  | 1.0490          | 0.6923   |
| 0.458         | 18.0    | 171  | 1.0402          | 0.7115   |
| 0.4068        | 18.9474 | 180  | 1.1223          | 0.6923   |
| 0.4068        | 20.0    | 190  | 1.0961          | 0.6731   |
| 0.3652        | 20.9474 | 199  | 1.1317          | 0.6731   |
| 0.3516        | 22.0    | 209  | 1.1016          | 0.6923   |
| 0.3517        | 22.9474 | 218  | 1.0740          | 0.6923   |
| 0.3428        | 24.0    | 228  | 1.5363          | 0.5769   |
| 0.3428        | 24.9474 | 237  | 1.1124          | 0.7308   |
| 0.3026        | 26.0    | 247  | 1.2449          | 0.6923   |
| 0.3162        | 26.9474 | 256  | 1.1481          | 0.6923   |
| 0.315         | 28.0    | 266  | 1.2540          | 0.6731   |
| 0.315         | 28.9474 | 275  | 1.2073          | 0.6923   |
| 0.2485        | 30.0    | 285  | 1.2773          | 0.6923   |
| 0.2839        | 30.9474 | 294  | 1.4296          | 0.6538   |
| 0.2719        | 32.0    | 304  | 1.2424          | 0.6923   |
| 0.2396        | 32.9474 | 313  | 1.3423          | 0.75     |
| 0.2396        | 34.0    | 323  | 1.2244          | 0.7115   |
| 0.2761        | 34.9474 | 332  | 1.2601          | 0.7115   |
| 0.2411        | 36.0    | 342  | 1.2808          | 0.7115   |
| 0.2566        | 36.9474 | 351  | 1.2784          | 0.7115   |
| 0.2065        | 38.0    | 361  | 1.2426          | 0.7115   |
| 0.2065        | 38.9474 | 370  | 1.2823          | 0.7115   |
| 0.2815        | 39.7895 | 378  | 1.2943          | 0.7115   |

Framework versions

  • Transformers 4.46.2
  • Pytorch 2.5.1+cu121
  • Datasets 3.1.0
  • Tokenizers 0.20.3