---
license: apache-2.0
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - f1
model-index:
  - name: 9-classifier-finetuned-padchest
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: train
          args: default
        metrics:
          - name: F1
            type: f1
            value: 0.9562502564102563
---

# 9-classifier-finetuned-padchest

This model is a fine-tuned version of [nickmuchi/vit-finetuned-chest-xray-pneumonia](https://huggingface.co/nickmuchi/vit-finetuned-chest-xray-pneumonia) on the imagefolder dataset. It achieves the following results on the evaluation set:

- Loss: 0.1585
- F1: 0.9563
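
The checkpoint can be loaded for chest X-ray image classification with the standard `image-classification` pipeline. A minimal sketch, assuming the model is hosted under the repo id `sbottazziunsam/9-classifier-finetuned-padchest` and that `chest_xray.png` is a local image (both are placeholders, not taken from this card):

```python
from PIL import Image
from transformers import pipeline

# Repo id is an assumption based on the model name; replace with the actual checkpoint path.
classifier = pipeline(
    "image-classification",
    model="sbottazziunsam/9-classifier-finetuned-padchest",
)

image = Image.open("chest_xray.png").convert("RGB")  # placeholder input image
for prediction in classifier(image):
    print(f"{prediction['label']}: {prediction['score']:.4f}")
```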

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):

- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
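
The list above maps onto `TrainingArguments` roughly as sketched below. `output_dir` and the evaluation strategy are assumptions (the card only reports per-epoch metrics); the Adam betas and epsilon match the `TrainingArguments` defaults, so they are not set explicitly.

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the reported hyperparameters.
# output_dir and evaluation_strategy are assumptions, not taken from this card.
training_args = TrainingArguments(
    output_dir="9-classifier-finetuned-padchest",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # effective train batch size: 32 * 4 = 128
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",    # assumption: metrics in the results table appear once per epoch
)
```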

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.5332        | 1.0   | 18   | 0.5695          | 0.7920 |
| 0.4488        | 2.0   | 36   | 0.3419          | 0.7934 |
| 0.3259        | 3.0   | 54   | 0.2451          | 0.7934 |
| 0.2795        | 4.0   | 72   | 0.1954          | 0.9443 |
| 0.2348        | 5.0   | 90   | 0.1698          | 0.9343 |
| 0.1937        | 6.0   | 108  | 0.1829          | 0.9297 |
| 0.1851        | 7.0   | 126  | 0.1484          | 0.9454 |
| 0.1925        | 8.0   | 144  | 0.1330          | 0.9545 |
| 0.1614        | 9.0   | 162  | 0.1403          | 0.9387 |
| 0.1734        | 10.0  | 180  | 0.1221          | 0.9531 |
| 0.1697        | 11.0  | 198  | 0.1142          | 0.9524 |
| 0.1824        | 12.0  | 216  | 0.1129          | 0.9586 |
| 0.1336        | 13.0  | 234  | 0.1369          | 0.9441 |
| 0.1596        | 14.0  | 252  | 0.1181          | 0.9540 |
| 0.1474        | 15.0  | 270  | 0.1116          | 0.9646 |
| 0.1256        | 16.0  | 288  | 0.1035          | 0.9598 |
| 0.1398        | 17.0  | 306  | 0.1195          | 0.9519 |
| 0.1219        | 18.0  | 324  | 0.1123          | 0.9588 |
| 0.1114        | 19.0  | 342  | 0.1126          | 0.9586 |
| 0.1089        | 20.0  | 360  | 0.1083          | 0.9584 |
| 0.1123        | 21.0  | 378  | 0.1038          | 0.9554 |
| 0.1241        | 22.0  | 396  | 0.0927          | 0.9657 |
| 0.099         | 23.0  | 414  | 0.1397          | 0.9559 |
| 0.1025        | 24.0  | 432  | 0.1201          | 0.9584 |
| 0.1088        | 25.0  | 450  | 0.0894          | 0.9627 |
| 0.0953        | 26.0  | 468  | 0.1083          | 0.9632 |
| 0.0953        | 27.0  | 486  | 0.1061          | 0.9592 |
| 0.0831        | 28.0  | 504  | 0.1129          | 0.9570 |
| 0.0836        | 29.0  | 522  | 0.1123          | 0.9598 |
| 0.0705        | 30.0  | 540  | 0.1611          | 0.9499 |
| 0.1047        | 31.0  | 558  | 0.1191          | 0.9570 |
| 0.0803        | 32.0  | 576  | 0.1440          | 0.9563 |
| 0.0852        | 33.0  | 594  | 0.1149          | 0.9541 |
| 0.0588        | 34.0  | 612  | 0.1830          | 0.9489 |
| 0.0701        | 35.0  | 630  | 0.1475          | 0.9592 |
| 0.0607        | 36.0  | 648  | 0.1350          | 0.9627 |
| 0.0749        | 37.0  | 666  | 0.1389          | 0.9563 |
| 0.073         | 38.0  | 684  | 0.1463          | 0.9559 |
| 0.0579        | 39.0  | 702  | 0.1289          | 0.9595 |
| 0.0757        | 40.0  | 720  | 0.1585          | 0.9584 |
| 0.0538        | 41.0  | 738  | 0.1565          | 0.9588 |
| 0.0461        | 42.0  | 756  | 0.1630          | 0.9559 |
| 0.072         | 43.0  | 774  | 0.1704          | 0.9554 |
| 0.0517        | 44.0  | 792  | 0.1657          | 0.9559 |
| 0.0524        | 45.0  | 810  | 0.1358          | 0.9570 |
| 0.0569        | 46.0  | 828  | 0.1538          | 0.9533 |
| 0.0506        | 47.0  | 846  | 0.1579          | 0.9588 |
| 0.0506        | 48.0  | 864  | 0.1505          | 0.9566 |
| 0.0538        | 49.0  | 882  | 0.1593          | 0.9588 |
| 0.0532        | 50.0  | 900  | 0.1585          | 0.9563 |
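
The F1 column above is a single score per epoch. A minimal sketch of a `compute_metrics` hook that would produce such a value with the `evaluate` library, assuming weighted averaging over the classes (the averaging mode is not stated in this card):

```python
import numpy as np
import evaluate

f1_metric = evaluate.load("f1")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # "weighted" averaging is an assumption; the card only reports one F1 value per epoch.
    return f1_metric.compute(predictions=predictions, references=labels, average="weighted")
```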

### Framework versions

- Transformers 4.28.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.18.0
- Tokenizers 0.13.3