# microsoft-swinv2-base-patch4-window16-256-batch32-lr0.005-standford-dogs
This model is a fine-tuned version of microsoft/swinv2-base-patch4-window16-256 on the stanford-dogs dataset. It achieves the following results on the evaluation set:
- Loss: 0.1854
- Accuracy: 0.9480
- F1: 0.9460
- Precision: 0.9499
- Recall: 0.9463
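
For quick inference, here is a minimal sketch using the `transformers` image-classification pipeline (the image path below is a placeholder, not from the card):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub
classifier = pipeline(
    "image-classification",
    model="amaye15/microsoft-swinv2-base-patch4-window16-256-batch32-lr0.005-standford-dogs",
)

# "dog.jpg" is a placeholder path; any RGB image (path, URL, or PIL.Image) works
predictions = classifier("dog.jpg")
print(predictions)  # top-5 predicted breed labels with scores
```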
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
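
A minimal sketch of `TrainingArguments` mirroring the list above; `output_dir` and the evaluation cadence are assumptions not stated in the card:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="swinv2-stanford-dogs",  # hypothetical; not stated in the card
    learning_rate=5e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,      # 32 x 4 = 128 effective train batch size
    optim="adamw_torch",                # Adam-style optimizer; betas/epsilon as listed above are the defaults
    lr_scheduler_type="linear",
    max_steps=1000,
    evaluation_strategy="steps",        # assumption, consistent with the per-step eval table below
    eval_steps=10,                      # assumption: the table below evaluates every 10 steps
)
```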
### Training results
Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
---|---|---|---|---|---|---|---|
4.7451 | 0.0777 | 10 | 4.6183 | 0.0717 | 0.0618 | 0.0669 | 0.0681 |
4.5204 | 0.1553 | 20 | 4.3242 | 0.2024 | 0.1493 | 0.1858 | 0.1827 |
4.2163 | 0.2330 | 30 | 3.8514 | 0.3817 | 0.3108 | 0.3755 | 0.3598 |
3.5996 | 0.3107 | 40 | 2.9936 | 0.6025 | 0.5397 | 0.5985 | 0.5852 |
2.7565 | 0.3883 | 50 | 1.8902 | 0.7738 | 0.7419 | 0.8002 | 0.7599 |
1.9695 | 0.4660 | 60 | 1.2027 | 0.8644 | 0.8512 | 0.8810 | 0.8585 |
1.4292 | 0.5437 | 70 | 0.8375 | 0.8902 | 0.8768 | 0.9034 | 0.8853 |
1.1191 | 0.6214 | 80 | 0.5400 | 0.9140 | 0.9085 | 0.9209 | 0.9114 |
0.9249 | 0.6990 | 90 | 0.4183 | 0.9193 | 0.9136 | 0.9284 | 0.9169 |
0.7701 | 0.7767 | 100 | 0.3423 | 0.9237 | 0.9167 | 0.9265 | 0.9207 |
0.7036 | 0.8544 | 110 | 0.3141 | 0.9259 | 0.9199 | 0.9270 | 0.9228 |
0.7279 | 0.9320 | 120 | 0.2814 | 0.9261 | 0.9200 | 0.9301 | 0.9235 |
0.6732 | 1.0097 | 130 | 0.2583 | 0.9278 | 0.9258 | 0.9337 | 0.9264 |
0.5251 | 1.0874 | 140 | 0.2433 | 0.9388 | 0.9343 | 0.9400 | 0.9365 |
0.506 | 1.1650 | 150 | 0.2486 | 0.9293 | 0.9237 | 0.9393 | 0.9284 |
0.4941 | 1.2427 | 160 | 0.2489 | 0.9295 | 0.9276 | 0.9340 | 0.9276 |
0.493 | 1.3204 | 170 | 0.2256 | 0.9361 | 0.9337 | 0.9402 | 0.9344 |
0.4975 | 1.3981 | 180 | 0.2236 | 0.9390 | 0.9352 | 0.9430 | 0.9377 |
0.4742 | 1.4757 | 190 | 0.2291 | 0.9390 | 0.9349 | 0.9443 | 0.9368 |
0.4788 | 1.5534 | 200 | 0.2187 | 0.9385 | 0.9348 | 0.9429 | 0.9359 |
0.4817 | 1.6311 | 210 | 0.2194 | 0.9383 | 0.9366 | 0.9438 | 0.9370 |
0.425 | 1.7087 | 220 | 0.2145 | 0.9395 | 0.9365 | 0.9419 | 0.9374 |
0.4392 | 1.7864 | 230 | 0.2106 | 0.9405 | 0.9367 | 0.9473 | 0.9390 |
0.4295 | 1.8641 | 240 | 0.2031 | 0.9427 | 0.9415 | 0.9461 | 0.9419 |
0.447 | 1.9417 | 250 | 0.2073 | 0.9373 | 0.9341 | 0.9406 | 0.9355 |
0.4718 | 2.0194 | 260 | 0.2073 | 0.9417 | 0.9398 | 0.9436 | 0.9396 |
0.4528 | 2.0971 | 270 | 0.2011 | 0.9427 | 0.9403 | 0.9447 | 0.9401 |
0.3958 | 2.1748 | 280 | 0.1979 | 0.9439 | 0.9402 | 0.9467 | 0.9418 |
0.4325 | 2.2524 | 290 | 0.1993 | 0.9422 | 0.9396 | 0.9448 | 0.9404 |
0.3228 | 2.3301 | 300 | 0.2025 | 0.9397 | 0.9372 | 0.9415 | 0.9375 |
0.383 | 2.4078 | 310 | 0.2032 | 0.9424 | 0.9396 | 0.9471 | 0.9407 |
0.4147 | 2.4854 | 320 | 0.1975 | 0.9434 | 0.9401 | 0.9466 | 0.9418 |
0.3587 | 2.5631 | 330 | 0.2048 | 0.9429 | 0.9412 | 0.9453 | 0.9415 |
0.3481 | 2.6408 | 340 | 0.2110 | 0.9417 | 0.9409 | 0.9453 | 0.9414 |
0.4007 | 2.7184 | 350 | 0.1945 | 0.9448 | 0.9415 | 0.9470 | 0.9429 |
0.3719 | 2.7961 | 360 | 0.2025 | 0.9414 | 0.9404 | 0.9447 | 0.9408 |
0.3993 | 2.8738 | 370 | 0.2012 | 0.9448 | 0.9419 | 0.9485 | 0.9430 |
0.3745 | 2.9515 | 380 | 0.1924 | 0.9451 | 0.9415 | 0.9499 | 0.9435 |
0.3638 | 3.0291 | 390 | 0.1940 | 0.9444 | 0.9424 | 0.9478 | 0.9424 |
0.3421 | 3.1068 | 400 | 0.1897 | 0.9466 | 0.9441 | 0.9496 | 0.9446 |
0.2906 | 3.1845 | 410 | 0.1893 | 0.9470 | 0.9457 | 0.9494 | 0.9457 |
0.3455 | 3.2621 | 420 | 0.1802 | 0.9485 | 0.9471 | 0.9499 | 0.9475 |
0.3338 | 3.3398 | 430 | 0.1926 | 0.9441 | 0.9414 | 0.9473 | 0.9424 |
0.3307 | 3.4175 | 440 | 0.2020 | 0.9419 | 0.9407 | 0.9447 | 0.9409 |
0.367 | 3.4951 | 450 | 0.1934 | 0.9466 | 0.9452 | 0.9487 | 0.9454 |
0.3248 | 3.5728 | 460 | 0.2004 | 0.9419 | 0.9393 | 0.9443 | 0.9401 |
0.3366 | 3.6505 | 470 | 0.1924 | 0.9431 | 0.9410 | 0.9467 | 0.9415 |
0.3342 | 3.7282 | 480 | 0.1938 | 0.9453 | 0.9436 | 0.9468 | 0.9438 |
0.3386 | 3.8058 | 490 | 0.2018 | 0.9444 | 0.9428 | 0.9469 | 0.9430 |
0.3841 | 3.8835 | 500 | 0.1933 | 0.9434 | 0.9414 | 0.9458 | 0.9418 |
0.3174 | 3.9612 | 510 | 0.1902 | 0.9453 | 0.9438 | 0.9466 | 0.9436 |
0.2996 | 4.0388 | 520 | 0.1888 | 0.9466 | 0.9454 | 0.9497 | 0.9460 |
0.2879 | 4.1165 | 530 | 0.1885 | 0.9441 | 0.9428 | 0.9464 | 0.9428 |
0.3035 | 4.1942 | 540 | 0.1909 | 0.9453 | 0.9434 | 0.9475 | 0.9437 |
0.2574 | 4.2718 | 550 | 0.1886 | 0.9453 | 0.9427 | 0.9476 | 0.9438 |
0.3219 | 4.3495 | 560 | 0.1889 | 0.9434 | 0.9411 | 0.9462 | 0.9417 |
0.2827 | 4.4272 | 570 | 0.1896 | 0.9448 | 0.9435 | 0.9464 | 0.9434 |
0.2869 | 4.5049 | 580 | 0.1946 | 0.9444 | 0.9430 | 0.9459 | 0.9427 |
0.3442 | 4.5825 | 590 | 0.1871 | 0.9458 | 0.9444 | 0.9477 | 0.9445 |
0.2739 | 4.6602 | 600 | 0.1881 | 0.9441 | 0.9415 | 0.9470 | 0.9421 |
0.3067 | 4.7379 | 610 | 0.1925 | 0.9475 | 0.9456 | 0.9499 | 0.9456 |
0.2674 | 4.8155 | 620 | 0.1919 | 0.9429 | 0.9405 | 0.9458 | 0.9408 |
0.3029 | 4.8932 | 630 | 0.1870 | 0.9446 | 0.9420 | 0.9468 | 0.9425 |
0.293 | 4.9709 | 640 | 0.1914 | 0.9422 | 0.9398 | 0.9444 | 0.9402 |
0.3242 | 5.0485 | 650 | 0.1906 | 0.9444 | 0.9428 | 0.9463 | 0.9429 |
0.3302 | 5.1262 | 660 | 0.1893 | 0.9453 | 0.9437 | 0.9467 | 0.9439 |
0.2754 | 5.2039 | 670 | 0.1859 | 0.9470 | 0.9452 | 0.9489 | 0.9453 |
0.2794 | 5.2816 | 680 | 0.1876 | 0.9458 | 0.9441 | 0.9473 | 0.9442 |
0.3015 | 5.3592 | 690 | 0.1870 | 0.9463 | 0.9450 | 0.9481 | 0.9451 |
0.2741 | 5.4369 | 700 | 0.1891 | 0.9427 | 0.9415 | 0.9447 | 0.9414 |
0.2856 | 5.5146 | 710 | 0.1898 | 0.9456 | 0.9439 | 0.9470 | 0.9439 |
0.2869 | 5.5922 | 720 | 0.1900 | 0.9463 | 0.9449 | 0.9485 | 0.9448 |
0.2874 | 5.6699 | 730 | 0.1926 | 0.9458 | 0.9434 | 0.9489 | 0.9439 |
0.1988 | 5.7476 | 740 | 0.1883 | 0.9453 | 0.9427 | 0.9469 | 0.9433 |
0.2644 | 5.8252 | 750 | 0.1895 | 0.9473 | 0.9448 | 0.9494 | 0.9455 |
0.2641 | 5.9029 | 760 | 0.1931 | 0.9439 | 0.9414 | 0.9466 | 0.9421 |
0.2391 | 5.9806 | 770 | 0.1925 | 0.9439 | 0.9414 | 0.9460 | 0.9421 |
0.2601 | 6.0583 | 780 | 0.1922 | 0.9466 | 0.9446 | 0.9485 | 0.9450 |
0.2499 | 6.1359 | 790 | 0.1921 | 0.9461 | 0.9443 | 0.9480 | 0.9443 |
0.264 | 6.2136 | 800 | 0.1877 | 0.9466 | 0.9450 | 0.9479 | 0.9451 |
0.2523 | 6.2913 | 810 | 0.1875 | 0.9468 | 0.9453 | 0.9483 | 0.9455 |
0.2406 | 6.3689 | 820 | 0.1880 | 0.9495 | 0.9477 | 0.9516 | 0.9481 |
0.2749 | 6.4466 | 830 | 0.1885 | 0.9466 | 0.9448 | 0.9483 | 0.9451 |
0.2702 | 6.5243 | 840 | 0.1885 | 0.9468 | 0.9451 | 0.9482 | 0.9455 |
0.2482 | 6.6019 | 850 | 0.1863 | 0.9475 | 0.9461 | 0.9493 | 0.9464 |
0.2403 | 6.6796 | 860 | 0.1897 | 0.9470 | 0.9451 | 0.9497 | 0.9453 |
0.2509 | 6.7573 | 870 | 0.1906 | 0.9483 | 0.9462 | 0.9508 | 0.9465 |
0.2689 | 6.8350 | 880 | 0.1867 | 0.9485 | 0.9459 | 0.9506 | 0.9466 |
0.2159 | 6.9126 | 890 | 0.1866 | 0.9485 | 0.9464 | 0.9504 | 0.9468 |
0.2488 | 6.9903 | 900 | 0.1866 | 0.9461 | 0.9435 | 0.9482 | 0.9443 |
0.2366 | 7.0680 | 910 | 0.1871 | 0.9448 | 0.9422 | 0.9464 | 0.9430 |
0.2602 | 7.1456 | 920 | 0.1854 | 0.9466 | 0.9441 | 0.9483 | 0.9447 |
0.2236 | 7.2233 | 930 | 0.1859 | 0.9453 | 0.9429 | 0.9467 | 0.9436 |
0.2463 | 7.3010 | 940 | 0.1863 | 0.9470 | 0.9450 | 0.9488 | 0.9454 |
0.2355 | 7.3786 | 950 | 0.1862 | 0.9461 | 0.9437 | 0.9476 | 0.9443 |
0.263 | 7.4563 | 960 | 0.1860 | 0.9473 | 0.9453 | 0.9491 | 0.9456 |
0.2384 | 7.5340 | 970 | 0.1860 | 0.9473 | 0.9453 | 0.9492 | 0.9456 |
0.2229 | 7.6117 | 980 | 0.1856 | 0.9478 | 0.9458 | 0.9497 | 0.9461 |
0.2277 | 7.6893 | 990 | 0.1855 | 0.9480 | 0.9460 | 0.9499 | 0.9463 |
0.2485 | 7.7670 | 1000 | 0.1854 | 0.9480 | 0.9460 | 0.9499 | 0.9463 |
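
The F1/precision/recall columns above are consistent with macro averaging over the 120 Stanford Dogs breed classes. A hedged `compute_metrics` sketch along those lines (the averaging mode is an assumption; the card does not state it):

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    """Accuracy plus macro-averaged F1/precision/recall.

    Macro averaging is an assumption here; the card does not state
    which averaging was used for the multi-class metrics.
    """
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="macro", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }
```

Passed as `compute_metrics=compute_metrics` to a `Trainer`, this would produce the per-evaluation rows shown in the table.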
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1