---
license: other
base_model: nvidia/mit-b0
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-segments-ECHO-dev-05-v1
results: []
---
# segformer-b0-finetuned-segments-ECHO-dev-05-v1
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the unreal-hug/REAL_DATASET_SEG dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4592
- Mean Iou: 0.3826
- Mean Accuracy: 0.5892
- Overall Accuracy: 0.5467
- Accuracy Unlabeled: nan
- Accuracy Lv: 0.7143
- Accuracy Lr: 0.4323
- Accuracy Ra: 0.7629
- Accuracy La: 0.4472
- Iou Unlabeled: 0.0
- Iou Lv: 0.7065
- Iou Lr: 0.4317
- Iou Ra: 0.4223
- Iou La: 0.3527
## Model description
SegFormer semantic-segmentation model built on the lightweight [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) encoder and fine-tuned on the unreal-hug/REAL_DATASET_SEG dataset. The evaluation covers four labeled classes (Lv, Lr, Ra, La) plus an unlabeled background class.
## Intended uses & limitations
More information needed. For basic use, the checkpoint can be loaded with the Transformers library as shown in the example below.
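A minimal inference sketch: it assumes an image-processor config was pushed alongside the checkpoint (if not, load `SegformerImageProcessor` from the base `nvidia/mit-b0` instead), and the input image path is a placeholder.

```python
import torch
from PIL import Image
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

model_id = "unreal-hug/segformer-b0-finetuned-segments-ECHO-dev-05-v1"

# Assumption: the image-processor config is available at this repo id;
# otherwise fall back to SegformerImageProcessor.from_pretrained("nvidia/mit-b0").
processor = SegformerImageProcessor.from_pretrained(model_id)
model = SegformerForSemanticSegmentation.from_pretrained(model_id)
model.eval()

image = Image.open("example.png").convert("RGB")  # placeholder input path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)

# SegFormer predicts at 1/4 resolution; upsample before taking the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred = upsampled.argmax(dim=1)[0]  # (H, W) map of class indices
```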
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
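Expressed as Transformers `TrainingArguments`, these settings correspond roughly to the sketch below; the `output_dir` and the step-based evaluation cadence (every 20 steps, inferred from the results table) are assumptions, not logged values, and the Adam betas/epsilon listed above match the library defaults, so they are not set explicitly.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="segformer-b0-finetuned-segments-ECHO-dev-05-v1",  # assumed
    learning_rate=6e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    num_train_epochs=50,
    lr_scheduler_type="linear",
    evaluation_strategy="steps",  # assumed
    eval_steps=20,                # assumed from the 20-step eval interval below
)
```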
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Lv | Accuracy Lr | Accuracy Ra | Accuracy La | Iou Unlabeled | Iou Lv | Iou Lr | Iou Ra | Iou La |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:-----------:|:-----------:|:-----------:|:-----------:|:-------------:|:------:|:------:|:------:|:------:|
| 1.1252 | 2.86 | 20 | 1.3259 | 0.1971 | 0.3379 | 0.4375 | nan | 0.0 | 0.6365 | 0.5291 | 0.1860 | 0.0 | 0.0 | 0.4923 | 0.3492 | 0.1439 |
| 0.9104 | 5.71 | 40 | 0.9589 | 0.1818 | 0.3421 | 0.3596 | nan | 0.0145 | 0.3590 | 0.7644 | 0.2304 | 0.0 | 0.0144 | 0.3436 | 0.3778 | 0.1731 |
| 0.7567 | 8.57 | 60 | 0.7761 | 0.2203 | 0.3739 | 0.3852 | nan | 0.0808 | 0.3882 | 0.6422 | 0.3844 | 0.0 | 0.0803 | 0.3778 | 0.4073 | 0.2360 |
| 0.7035 | 11.43 | 80 | 0.7442 | 0.2729 | 0.4718 | 0.4941 | nan | 0.2145 | 0.5077 | 0.8370 | 0.3279 | 0.0 | 0.2134 | 0.4817 | 0.4073 | 0.2619 |
| 0.5781 | 14.29 | 100 | 0.6260 | 0.2876 | 0.4446 | 0.4279 | nan | 0.4235 | 0.3777 | 0.5787 | 0.3986 | 0.0 | 0.3683 | 0.3761 | 0.4063 | 0.2873 |
| 0.5438 | 17.14 | 120 | 0.5559 | 0.3877 | 0.5412 | 0.5761 | nan | 0.5803 | 0.6504 | 0.5190 | 0.4149 | 0.0 | 0.5671 | 0.6193 | 0.4171 | 0.3352 |
| 0.5198 | 20.0 | 140 | 0.5617 | 0.3724 | 0.5617 | 0.5335 | nan | 0.6661 | 0.4532 | 0.7059 | 0.4216 | 0.0 | 0.6419 | 0.4532 | 0.4129 | 0.3540 |
| 0.4435 | 22.86 | 160 | 0.5393 | 0.4160 | 0.6198 | 0.6126 | nan | 0.7555 | 0.5832 | 0.6962 | 0.4442 | 0.0 | 0.7000 | 0.5705 | 0.4873 | 0.3221 |
| 0.5002 | 25.71 | 180 | 0.5126 | 0.4094 | 0.6080 | 0.6043 | nan | 0.6854 | 0.5833 | 0.6945 | 0.4687 | 0.0 | 0.6771 | 0.5761 | 0.4762 | 0.3176 |
| 0.4142 | 28.57 | 200 | 0.4874 | 0.3503 | 0.5361 | 0.4949 | nan | 0.6967 | 0.3895 | 0.6436 | 0.4147 | 0.0 | 0.6287 | 0.3895 | 0.4106 | 0.3228 |
| 0.3092 | 31.43 | 220 | 0.4819 | 0.3857 | 0.6001 | 0.5534 | nan | 0.7296 | 0.4267 | 0.8020 | 0.4423 | 0.0 | 0.7157 | 0.4267 | 0.4266 | 0.3595 |
| 0.2895 | 34.29 | 240 | 0.4969 | 0.3983 | 0.6220 | 0.5809 | nan | 0.7353 | 0.4689 | 0.8050 | 0.4787 | 0.0 | 0.7265 | 0.4677 | 0.4474 | 0.3498 |
| 0.3046 | 37.14 | 260 | 0.4767 | 0.4248 | 0.6412 | 0.6115 | nan | 0.7853 | 0.5270 | 0.7814 | 0.4711 | 0.0 | 0.7712 | 0.5199 | 0.4587 | 0.3742 |
| 0.3514 | 40.0 | 280 | 0.4531 | 0.3978 | 0.5989 | 0.5767 | nan | 0.7112 | 0.5082 | 0.7478 | 0.4282 | 0.0 | 0.6979 | 0.5024 | 0.4353 | 0.3537 |
| 0.2891 | 42.86 | 300 | 0.4629 | 0.3842 | 0.5885 | 0.5488 | nan | 0.7046 | 0.4397 | 0.7693 | 0.4403 | 0.0 | 0.6982 | 0.4366 | 0.4237 | 0.3623 |
| 0.2512 | 45.71 | 320 | 0.4584 | 0.3783 | 0.5794 | 0.5357 | nan | 0.7144 | 0.4199 | 0.7390 | 0.4443 | 0.0 | 0.7016 | 0.4196 | 0.4134 | 0.3568 |
| 0.2695 | 48.57 | 340 | 0.4592 | 0.3826 | 0.5892 | 0.5467 | nan | 0.7143 | 0.4323 | 0.7629 | 0.4472 | 0.0 | 0.7065 | 0.4317 | 0.4223 | 0.3527 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0