---
language:
  - eng
license: cc0-1.0
tags:
  - multilabel-image-classification
  - multilabel
  - generated_from_trainer
base_model: Ziboiai-large-2024_10_31-prova_batch-size32_freeze_probs
model-index:
  - name: Ziboiai-large-2024_10_31-prova_batch-size32_freeze_probs
    results: []
---

# Ziboiai-large-2024_10_31-prova_batch-size32_freeze_probs

Ziboiai is a fine-tuned version of Ziboiai-large-2024_10_31-prova_batch-size32_freeze_probs. It achieves the following results on the test set:

- Loss: 0.6188
- F1 Micro: 0.9261
- F1 Macro: 0.8546
- Accuracy: 0.1600
- RMSE: 0.3415
- MAE: 0.3061
- R2: -1.5955
| Class | F1 per class |
|:------|-------------:|
| Acropore_branched | 0.8966 |
| Acropore_digitised | 0.6301 |
| Acropore_tabular | 1.0000 |
| Algae | 1.0000 |
| Dead_coral | 0.8395 |
| Fish | 0.8861 |
| Millepore | 1.0000 |
| No_acropore_encrusting | 1.0000 |
| No_acropore_massive | 0.0000 |
| No_acropore_sub_massive | 0.8571 |
| Rock | 1.0000 |
| Rubble | 1.0000 |
| Sand | 1.0000 |

Note that Millepore and No_acropore_encrusting have no test images (see the data table below), so their perfect per-class scores should be read with caution.
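For reference, here is a minimal scikit-learn sketch of how metrics like those above are commonly computed for multilabel models. It assumes hard predictions come from thresholding sigmoid probabilities at 0.5 and that RMSE, MAE, and R2 are computed on the probabilities; the exact evaluation code is in the training repository.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score,
                             mean_absolute_error, mean_squared_error, r2_score)

# Placeholder arrays: binary targets and predicted probabilities,
# both shaped [n_samples, n_classes].
y_true = np.array([[1, 0, 1], [0, 1, 1]])
y_prob = np.array([[0.9, 0.2, 0.7], [0.1, 0.8, 0.4]])
y_pred = (y_prob >= 0.5).astype(int)  # 0.5 threshold for multilabel decisions

print("F1 micro:", f1_score(y_true, y_pred, average="micro"))
print("F1 macro:", f1_score(y_true, y_pred, average="macro"))
print("Accuracy:", accuracy_score(y_true, y_pred))  # exact-match (subset) accuracy
print("RMSE:", np.sqrt(mean_squared_error(y_true, y_prob)))
print("MAE:", mean_absolute_error(y_true, y_prob))
print("R2:", r2_score(y_true, y_prob))
```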

## Model description

Ziboiai is a model built on top of the Ziboiai-large-2024_10_31-prova_batch-size32_freeze_probs model for underwater multilabel image classification. The classification head is a combination of linear, ReLU, batch normalization, and dropout layers.

The source code for training the model can be found in this Git repository.
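As a rough, non-authoritative sketch (layer widths, ordering, and dropout rate below are assumptions, not the trained configuration), such a head could look like:

```python
import torch.nn as nn

# Illustrative head combining the layer types named above; all sizes and
# the dropout probability are assumptions.
hidden_size, num_classes = 1024, 13  # 13 classes are listed in this card

classification_head = nn.Sequential(
    nn.Linear(hidden_size, 512),
    nn.BatchNorm1d(512),
    nn.ReLU(),
    nn.Dropout(p=0.3),
    nn.Linear(512, num_classes),  # raw logits; apply a sigmoid for multilabel probabilities
)
```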


## Intended uses & limitations

You can use the raw model to classify diverse marine species, encompassing coral morphotype classes taken from the Global Coral Reef Monitoring Network (GCRMN), habitat classes, and seagrass species.
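The snippet below is a hypothetical usage sketch: it assumes the checkpoint loads through the standard transformers image-classification classes and that per-class sigmoid probabilities are thresholded at 0.5. Check the training repository for the actual inference code.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Hypothetical checkpoint path and loading classes; verify against the
# training repository before relying on this.
ckpt = "Ziboiai-large-2024_10_31-prova_batch-size32_freeze_probs"
processor = AutoImageProcessor.from_pretrained(ckpt)
model = AutoModelForImageClassification.from_pretrained(ckpt)
model.eval()

image = Image.open("reef_photo.jpg")  # placeholder underwater image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Multilabel decision: independent sigmoid per class, thresholded at 0.5.
probs = torch.sigmoid(logits)[0]
labels = [model.config.id2label[i] for i, p in enumerate(probs) if p >= 0.5]
print(labels)
```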


## Training and evaluation data

Details on the estimated number of images for each class are given in the following table:

| Class | train | test | val | Total |
|:------|------:|-----:|----:|------:|
| Acropore_branched | 41 | 32 | 33 | 106 |
| Acropore_digitised | 15 | 14 | 14 | 43 |
| Acropore_tabular | 5 | 8 | 7 | 20 |
| Algae | 50 | 50 | 50 | 150 |
| Dead_coral | 25 | 28 | 30 | 83 |
| Fish | 34 | 24 | 31 | 89 |
| Millepore | 1 | 0 | 0 | 1 |
| No_acropore_encrusting | 1 | 0 | 0 | 1 |
| No_acropore_massive | 2 | 5 | 5 | 12 |
| No_acropore_sub_massive | 27 | 28 | 27 | 82 |
| Rock | 45 | 47 | 45 | 137 |
| Rubble | 40 | 45 | 44 | 129 |
| Sand | 42 | 46 | 45 | 133 |

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- Number of Epochs: 40.0
- Learning Rate: 0.001
- Train Batch Size: 32
- Eval Batch Size: 32
- Optimizer: Adam
- LR Scheduler Type: ReduceLROnPlateau with a patience of 5 epochs and a factor of 0.1 (see the sketch after this list)
- Freeze Encoder: Yes
- Data Augmentation: Yes
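The LR drops visible in the training results table (0.001 to 0.0001 at epoch 8, then to 1e-05 at epoch 37) are consistent with this configuration. Below is a minimal, self-contained sketch of the optimizer and scheduler setup; the encoder/head stand-ins and the placeholder loss are illustrative, not the real training code.

```python
import torch
import torch.nn as nn

# Illustrative stand-ins; the real model and training loop live in the
# Git repository referenced above.
encoder = nn.Linear(8, 8)  # frozen backbone stand-in
head = nn.Linear(8, 13)    # trainable classification head stand-in

for p in encoder.parameters():
    p.requires_grad = False  # "Freeze Encoder: Yes"

optimizer = torch.optim.Adam(head.parameters(), lr=0.001)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.1, patience=5
)

for epoch in range(40):
    val_loss = 1.0  # placeholder; pass the real validation loss here
    scheduler.step(val_loss)  # cuts the LR by 10x after 5 epochs without improvement
```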

### Data Augmentation

Data were augmented using the following transformations (a pipeline sketch follows the two lists):

#### Train Transforms

- PreProcess: No additional parameters
- Resize: probability=1.00
- RandomHorizontalFlip: probability=0.25
- RandomVerticalFlip: probability=0.25
- ColorJiggle: probability=0.25
- RandomPerspective: probability=0.25
- Normalize: probability=1.00

#### Val Transforms

- PreProcess: No additional parameters
- Resize: probability=1.00
- Normalize: probability=1.00
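The transform names above (notably ColorJiggle) suggest Kornia's augmentation API. Below is a minimal sketch under that assumption; the image size, jitter strengths, and normalization statistics are placeholders rather than the values used in training.

```python
import torch
import kornia.augmentation as K

# Hypothetical parameter values throughout; only the transform names and
# probabilities come from the lists above.
train_transforms = torch.nn.Sequential(
    K.Resize((224, 224)),
    K.RandomHorizontalFlip(p=0.25),
    K.RandomVerticalFlip(p=0.25),
    K.ColorJiggle(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.1, p=0.25),
    K.RandomPerspective(distortion_scale=0.5, p=0.25),
    K.Normalize(mean=torch.tensor([0.485, 0.456, 0.406]),
                std=torch.tensor([0.229, 0.224, 0.225])),
)

val_transforms = torch.nn.Sequential(  # deterministic steps only
    K.Resize((224, 224)),
    K.Normalize(mean=torch.tensor([0.485, 0.456, 0.406]),
                std=torch.tensor([0.229, 0.224, 0.225])),
)

images = torch.rand(4, 3, 256, 256)  # dummy batch of images in [0, 1]
augmented = train_transforms(images)
```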

### Training results

Values are rounded to four decimal places.

| Epoch | Validation Loss | MAE | RMSE | R2 | Learning Rate |
|------:|----------------:|----:|-----:|---:|--------------:|
| 1 | 0.7150 | 0.3849 | 0.4100 | -20.2909 | 0.001 |
| 2 | 0.7314 | 0.3895 | 0.4163 | -21.2182 | 0.001 |
| 3 | 0.7726 | 0.4041 | 0.4321 | -24.8224 | 0.001 |
| 4 | 0.7917 | 0.4095 | 0.4380 | -26.5816 | 0.001 |
| 5 | 0.7853 | 0.4021 | 0.4318 | -26.9559 | 0.001 |
| 6 | 0.7648 | 0.3905 | 0.4224 | -24.4015 | 0.001 |
| 7 | 0.7392 | 0.3760 | 0.4103 | -22.5579 | 0.001 |
| 8 | 0.7115 | 0.3639 | 0.3983 | -20.0674 | 0.0001 |
| 9 | 0.6897 | 0.3535 | 0.3879 | -18.1665 | 0.0001 |
| 10 | 0.6777 | 0.3468 | 0.3818 | -16.9447 | 0.0001 |
| 11 | 0.6702 | 0.3424 | 0.3780 | -16.0375 | 0.0001 |
| 12 | 0.6639 | 0.3389 | 0.3744 | -15.6052 | 0.0001 |
| 13 | 0.6565 | 0.3346 | 0.3703 | -14.8051 | 0.0001 |
| 14 | 0.6501 | 0.3310 | 0.3668 | -14.2312 | 0.0001 |
| 15 | 0.6468 | 0.3289 | 0.3648 | -14.0799 | 0.0001 |
| 16 | 0.6471 | 0.3289 | 0.3650 | -14.2557 | 0.0001 |
| 17 | 0.6435 | 0.3268 | 0.3631 | -14.0598 | 0.0001 |
| 18 | 0.6438 | 0.3270 | 0.3634 | -14.0369 | 0.0001 |
| 19 | 0.6400 | 0.3250 | 0.3614 | -13.8152 | 0.0001 |
| 20 | 0.6392 | 0.3246 | 0.3609 | -13.7104 | 0.0001 |
| 21 | 0.6387 | 0.3246 | 0.3606 | -13.8099 | 0.0001 |
| 22 | 0.6388 | 0.3243 | 0.3606 | -13.8497 | 0.0001 |
| 23 | 0.6362 | 0.3228 | 0.3590 | -13.5622 | 0.0001 |
| 24 | 0.6354 | 0.3223 | 0.3585 | -13.6453 | 0.0001 |
| 25 | 0.6345 | 0.3214 | 0.3578 | -13.6023 | 0.0001 |
| 26 | 0.6349 | 0.3212 | 0.3581 | -13.6304 | 0.0001 |
| 27 | 0.6333 | 0.3201 | 0.3571 | -13.5613 | 0.0001 |
| 28 | 0.6295 | 0.3177 | 0.3548 | -13.2331 | 0.0001 |
| 29 | 0.6285 | 0.3173 | 0.3543 | -13.1623 | 0.0001 |
| 30 | 0.6263 | 0.3163 | 0.3532 | -12.7132 | 0.0001 |
| 31 | 0.6273 | 0.3167 | 0.3538 | -12.8739 | 0.0001 |
| 32 | 0.6294 | 0.3181 | 0.3550 | -12.9355 | 0.0001 |
| 33 | 0.6299 | 0.3185 | 0.3554 | -12.9352 | 0.0001 |
| 34 | 0.6321 | 0.3193 | 0.3564 | -13.2672 | 0.0001 |
| 35 | 0.6279 | 0.3175 | 0.3541 | -12.9995 | 0.0001 |
| 36 | 0.6280 | 0.3174 | 0.3541 | -13.0074 | 0.0001 |
| 37 | 0.6304 | 0.3187 | 0.3554 | -13.2310 | 1e-05 |
| 38 | 0.6297 | 0.3183 | 0.3551 | -12.9830 | 1e-05 |
| 39 | 0.6308 | 0.3193 | 0.3558 | -13.1598 | 1e-05 |
| 40 | 0.6292 | 0.3183 | 0.3548 | -13.0698 | 1e-05 |

### Framework Versions

- Transformers: 4.44.2
- Pytorch: 2.4.1+cu121
- Datasets: 3.0.0
- Tokenizers: 0.19.1