---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-vit
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8084291187739464
---
# swin-tiny-patch4-window7-224-finetuned-vit
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5493
- Accuracy: 0.8084

Per-class results on the evaluation set:

| Class | Precision | Recall | F1-score | Support |
|:--|--:|--:|--:|--:|
| Crack | 0.5735 | 0.6964 | 0.6290 | 56 |
| Environment - ground | 0.9722 | 0.9722 | 0.9722 | 36 |
| Environment - other | 0.8913 | 0.9318 | 0.9111 | 44 |
| Environment - sky | 0.8974 | 0.9722 | 0.9333 | 36 |
| Environment - vegetation | 0.9623 | 0.9623 | 0.9623 | 53 |
| Joint defect | 0.6471 | 0.7857 | 0.7097 | 28 |
| Loss of section | 0.0000 | 0.0000 | 0.0000 | 3 |
| Spalling | 0.5161 | 0.4103 | 0.4571 | 39 |
| Vegetation | 0.8387 | 0.8814 | 0.8595 | 59 |
| Wall - grafitti | 0.9600 | 0.9600 | 0.9600 | 25 |
| Wall - normal | 0.7576 | 0.6250 | 0.6849 | 40 |
| Wall - other | 0.8889 | 0.8333 | 0.8602 | 48 |
| Wall - stain | 0.8400 | 0.7636 | 0.8000 | 55 |
| Macro avg | 0.7496 | 0.7534 | 0.7492 | 522 |
| Weighted avg | 0.8056 | 0.8084 | 0.8046 | 522 |
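The per-class precision/recall/F1/support breakdown above matches the dictionary layout produced by scikit-learn's `classification_report`. The metric function used for this run is not included in the card; the following is only a minimal sketch, assuming a standard `compute_metrics` hook passed to the Hugging Face `Trainer`.

```python
import numpy as np
from sklearn.metrics import classification_report


def compute_metrics(eval_pred):
    """Per-class precision/recall/F1/support plus accuracy and macro/weighted averages."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # output_dict=True returns one entry per class plus 'accuracy', 'macro avg'
    # and 'weighted avg' rows, matching the structure reported in this card.
    # Pass target_names=<list of class names> to get named rows such as 'Crack'.
    return classification_report(labels, preds, output_dict=True, zero_division=0)
```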
## Model description
This model is a fine-tuned [Swin Transformer](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) (tiny variant: patch size 4, window size 7, 224×224 input resolution) for image classification. It distinguishes 13 classes of wall-surface and surrounding-environment imagery, covering defect categories (Crack, Joint defect, Loss of section, Spalling) as well as environment and wall-condition categories (ground, sky, vegetation, graffiti, stains, and others).
## Intended uses & limitations
The model is intended for classifying single images of wall surfaces and their surroundings into the 13 categories listed above. The evaluation results point to clear limitations: the rare `Loss of section` class (only 3 evaluation samples) is never predicted correctly, and `Spalling` and `Crack` have comparatively low F1-scores, so defect predictions should be reviewed rather than used on their own.
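A minimal inference sketch follows. The repository id `amiqinayat/swin-tiny-patch4-window7-224-finetuned-vit` and the file name `example.jpg` are assumptions; substitute the actual Hub id or a local checkpoint directory and your own image.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Hypothetical repo id; replace with the actual Hub id or a local checkpoint path.
checkpoint = "amiqinayat/swin-tiny-patch4-window7-224-finetuned-vit"

processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForImageClassification.from_pretrained(checkpoint)

image = Image.open("example.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(-1).item()
print(model.config.id2label[predicted_id])
```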
## Training and evaluation data
The model was trained and evaluated on a local image dataset loaded with the Hugging Face Datasets `imagefolder` builder (one sub-directory per class). The evaluation split contains 522 images across the 13 classes, with class sizes ranging from 3 (`Loss of section`) to 59 (`Vegetation`); the training split is not documented in detail, but the step counts below imply roughly 4,700 training images (about 146 optimizer steps per epoch at an effective batch size of 32).
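`imagefolder` here refers to the generic Hugging Face Datasets image-folder builder. A minimal loading sketch is shown below; the directory path is a placeholder, since the actual data location is not documented.

```python
from datasets import load_dataset

# Placeholder path: one sub-directory per class, e.g. data/Crack/, data/Spalling/, ...
dataset = load_dataset("imagefolder", data_dir="path/to/wall_images")

# The sub-directory names become the ClassLabel feature on the "label" column.
labels = dataset["train"].features["label"].names
print(f"{len(labels)} classes:", labels)
```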
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
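These values map one-to-one onto Hugging Face `TrainingArguments`. A minimal sketch, assuming per-epoch evaluation and an output directory named after the model, is shown below; only the listed hyperparameters are documented, the rest are assumptions.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="swin-tiny-patch4-window7-224-finetuned-vit",  # assumed output path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,   # effective train batch size: 8 * 4 = 32
    num_train_epochs=3,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    evaluation_strategy="epoch",     # assumption: metrics are reported once per epoch below
    save_strategy="epoch",           # assumption
)
```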
### Training results
| Training Loss | Epoch | Step | Validation Loss | Crack | Environment - ground | Environment - other | Environment - sky | Environment - vegetation | Joint defect | Loss of section | Spalling | Vegetation | Wall - grafitti | Wall - normal | Wall - other | Wall - stain | Accuracy | Macro avg | Weighted avg |
|:-------------:|:-----:|:----:|:---------------:|:--------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------:|:--------:|:---------------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------------:|
| 0.8877 | 0.99 | 146 | 0.8076 | {'precision': 0.5294117647058824, 'recall': 0.6428571428571429, 'f1-score': 0.5806451612903226, 'support': 56} | {'precision': 0.9459459459459459, 'recall': 0.9722222222222222, 'f1-score': 0.9589041095890412, 'support': 36} | {'precision': 0.7916666666666666, 'recall': 0.8636363636363636, 'f1-score': 0.8260869565217391, 'support': 44} | {'precision': 0.8780487804878049, 'recall': 1.0, 'f1-score': 0.9350649350649352, 'support': 36} | {'precision': 0.9807692307692307, 'recall': 0.9622641509433962, 'f1-score': 0.9714285714285713, 'support': 53} | {'precision': 0.7037037037037037, 'recall': 0.6785714285714286, 'f1-score': 0.6909090909090909, 'support': 28} | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 3} | {'precision': 0.5588235294117647, 'recall': 0.48717948717948717, 'f1-score': 0.5205479452054794, 'support': 39} | {'precision': 0.6625, 'recall': 0.8983050847457628, 'f1-score': 0.762589928057554, 'support': 59} | {'precision': 0.7666666666666667, 'recall': 0.92, 'f1-score': 0.8363636363636363, 'support': 25} | {'precision': 0.9411764705882353, 'recall': 0.4, 'f1-score': 0.5614035087719298, 'support': 40} | {'precision': 0.8780487804878049, 'recall': 0.75, 'f1-score': 0.8089887640449439, 'support': 48} | {'precision': 0.7659574468085106, 'recall': 0.6545454545454545, 'f1-score': 0.7058823529411765, 'support': 55} | 0.7625 | {'precision': 0.7232860758647859, 'recall': 0.7099677949770199, 'f1-score': 0.7045242277068015, 'support': 522} | {'precision': 0.7735594241725829, 'recall': 0.7624521072796935, 'f1-score': 0.7551578669008161, 'support': 522} |
| 0.8113 | 2.0 | 293 | 0.6101 | {'precision': 0.5555555555555556, 'recall': 0.7142857142857143, 'f1-score': 0.6250000000000001, 'support': 56} | {'precision': 0.9714285714285714, 'recall': 0.9444444444444444, 'f1-score': 0.9577464788732395, 'support': 36} | {'precision': 0.8888888888888888, 'recall': 0.9090909090909091, 'f1-score': 0.8988764044943819, 'support': 44} | {'precision': 0.8974358974358975, 'recall': 0.9722222222222222, 'f1-score': 0.9333333333333333, 'support': 36} | {'precision': 0.9622641509433962, 'recall': 0.9622641509433962, 'f1-score': 0.9622641509433962, 'support': 53} | {'precision': 0.7857142857142857, 'recall': 0.7857142857142857, 'f1-score': 0.7857142857142857, 'support': 28} | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 3} | {'precision': 0.4117647058823529, 'recall': 0.5384615384615384, 'f1-score': 0.4666666666666667, 'support': 39} | {'precision': 0.8153846153846154, 'recall': 0.8983050847457628, 'f1-score': 0.8548387096774194, 'support': 59} | {'precision': 0.7741935483870968, 'recall': 0.96, 'f1-score': 0.8571428571428571, 'support': 25} | {'precision': 0.75, 'recall': 0.45, 'f1-score': 0.5625000000000001, 'support': 40} | {'precision': 0.9024390243902439, 'recall': 0.7708333333333334, 'f1-score': 0.8314606741573034, 'support': 48} | {'precision': 0.8157894736842105, 'recall': 0.5636363636363636, 'f1-score': 0.6666666666666666, 'support': 55} | 0.7778 | {'precision': 0.7331429782842396, 'recall': 0.7284044651444593, 'f1-score': 0.7232469405899653, 'support': 522} | {'precision': 0.7896708656541912, 'recall': 0.7777777777777778, 'f1-score': 0.7754219719596664, 'support': 522} |
| 0.6069 | 2.98 | 438 | 0.5493 | {'precision': 0.5735294117647058, 'recall': 0.6964285714285714, 'f1-score': 0.6290322580645161, 'support': 56} | {'precision': 0.9722222222222222, 'recall': 0.9722222222222222, 'f1-score': 0.9722222222222222, 'support': 36} | {'precision': 0.8913043478260869, 'recall': 0.9318181818181818, 'f1-score': 0.9111111111111111, 'support': 44} | {'precision': 0.8974358974358975, 'recall': 0.9722222222222222, 'f1-score': 0.9333333333333333, 'support': 36} | {'precision': 0.9622641509433962, 'recall': 0.9622641509433962, 'f1-score': 0.9622641509433962, 'support': 53} | {'precision': 0.6470588235294118, 'recall': 0.7857142857142857, 'f1-score': 0.7096774193548386, 'support': 28} | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 3} | {'precision': 0.5161290322580645, 'recall': 0.41025641025641024, 'f1-score': 0.4571428571428572, 'support': 39} | {'precision': 0.8387096774193549, 'recall': 0.8813559322033898, 'f1-score': 0.859504132231405, 'support': 59} | {'precision': 0.96, 'recall': 0.96, 'f1-score': 0.96, 'support': 25} | {'precision': 0.7575757575757576, 'recall': 0.625, 'f1-score': 0.6849315068493151, 'support': 40} | {'precision': 0.8888888888888888, 'recall': 0.8333333333333334, 'f1-score': 0.8602150537634408, 'support': 48} | {'precision': 0.84, 'recall': 0.7636363636363637, 'f1-score': 0.8000000000000002, 'support': 55} | 0.8084 | {'precision': 0.7496244776818297, 'recall': 0.753403974906029, 'f1-score': 0.7491872342320336, 'support': 522} | {'precision': 0.805637888745576, 'recall': 0.8084291187739464, 'f1-score': 0.8046217646882747, 'support': 522} |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3