---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
  - generated_from_keras_callback
model-index:
  - name: dwiedarioo/vit-base-patch16-224-in21k-final2multibrainmri
    results: []
---

# dwiedarioo/vit-base-patch16-224-in21k-final2multibrainmri

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results at the final training epoch:

- Train Loss: 0.0149
- Train Accuracy: 1.0
- Train Top-3-accuracy: 1.0
- Validation Loss: 0.1094
- Validation Accuracy: 0.9741
- Validation Top-3-accuracy: 0.9914
- Epoch: 34

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- optimizer: AdamWeightDecay (beta_1 ≈ 0.9, beta_2 ≈ 0.999, epsilon: 1e-08, amsgrad: False, weight_decay_rate: 0.01), wrapped in dynamic loss scaling for mixed precision (initial_scale: 32768.0, dynamic_growth_steps: 2000)
- learning_rate: PolynomialDecay from 3e-05 to 0.0 over 8200 steps (power: 1.0, cycle: False)
- training_precision: mixed_float16
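With power 1.0 and no cycling, the PolynomialDecay schedule above is simply a linear ramp from 3e-05 down to 0.0 over 8200 steps. A minimal plain-Python sketch of the per-step rate (an illustration of the schedule's arithmetic, not the Keras implementation):

```python
def polynomial_decay_lr(step, initial_lr=3e-05, decay_steps=8200,
                        end_lr=0.0, power=1.0):
    """Learning rate at a given step under polynomial decay."""
    # cycle=False in the config above: past the horizon, stay at end_lr.
    step = min(step, decay_steps)
    remaining = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * remaining ** power + end_lr
```

Halfway through training (step 4100) this yields 1.5e-05, and from step 8200 onward the rate is exactly 0.0.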

### Training results

| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 2.2742 | 0.3856 | 0.6522 | 1.8596 | 0.6112 | 0.8337 | 0 |
| 1.5673 | 0.6919 | 0.8778 | 1.3120 | 0.7883 | 0.9136 | 1 |
| 1.0377 | 0.8622 | 0.9576 | 0.9078 | 0.8661 | 0.9611 | 2 |
| 0.6816 | 0.9511 | 0.9859 | 0.6497 | 0.9222 | 0.9849 | 3 |
| 0.4698 | 0.9805 | 0.9939 | 0.5104 | 0.9395 | 0.9870 | 4 |
| 0.3375 | 0.9897 | 0.9973 | 0.3975 | 0.9590 | 0.9892 | 5 |
| 0.2554 | 0.9966 | 0.9992 | 0.3107 | 0.9676 | 0.9978 | 6 |
| 0.2346 | 0.9905 | 0.9992 | 0.3804 | 0.9287 | 0.9914 | 7 |
| 0.1976 | 0.9935 | 0.9989 | 0.3250 | 0.9546 | 0.9914 | 8 |
| 0.1686 | 0.9939 | 0.9992 | 0.4980 | 0.8920 | 0.9762 | 9 |
| 0.1423 | 0.9969 | 0.9996 | 0.2129 | 0.9654 | 0.9957 | 10 |
| 0.1073 | 0.9992 | 1.0 | 0.1840 | 0.9741 | 0.9978 | 11 |
| 0.0925 | 0.9992 | 1.0 | 0.1714 | 0.9719 | 0.9978 | 12 |
| 0.0809 | 0.9992 | 1.0 | 0.1595 | 0.9719 | 0.9978 | 13 |
| 0.0715 | 0.9992 | 1.0 | 0.1503 | 0.9719 | 0.9978 | 14 |
| 0.0637 | 1.0 | 1.0 | 0.1426 | 0.9762 | 0.9978 | 15 |
| 0.0573 | 0.9996 | 1.0 | 0.1361 | 0.9784 | 0.9978 | 16 |
| 0.0516 | 1.0 | 1.0 | 0.1325 | 0.9784 | 0.9957 | 17 |
| 0.0469 | 1.0 | 1.0 | 0.1279 | 0.9784 | 0.9957 | 18 |
| 0.0427 | 1.0 | 1.0 | 0.1248 | 0.9784 | 0.9957 | 19 |
| 0.0392 | 1.0 | 1.0 | 0.1224 | 0.9784 | 0.9957 | 20 |
| 0.0359 | 1.0 | 1.0 | 0.1191 | 0.9784 | 0.9957 | 21 |
| 0.0331 | 1.0 | 1.0 | 0.1178 | 0.9762 | 0.9914 | 22 |
| 0.0306 | 1.0 | 1.0 | 0.1162 | 0.9784 | 0.9957 | 23 |
| 0.0284 | 1.0 | 1.0 | 0.1144 | 0.9784 | 0.9957 | 24 |
| 0.0264 | 1.0 | 1.0 | 0.1143 | 0.9741 | 0.9957 | 25 |
| 0.0246 | 1.0 | 1.0 | 0.1126 | 0.9762 | 0.9957 | 26 |
| 0.0230 | 1.0 | 1.0 | 0.1104 | 0.9784 | 0.9957 | 27 |
| 0.0215 | 1.0 | 1.0 | 0.1110 | 0.9762 | 0.9935 | 28 |
| 0.0201 | 1.0 | 1.0 | 0.1091 | 0.9762 | 0.9957 | 29 |
| 0.0189 | 1.0 | 1.0 | 0.1101 | 0.9741 | 0.9957 | 30 |
| 0.0178 | 1.0 | 1.0 | 0.1099 | 0.9762 | 0.9914 | 31 |
| 0.0167 | 1.0 | 1.0 | 0.1091 | 0.9762 | 0.9935 | 32 |
| 0.0158 | 1.0 | 1.0 | 0.1091 | 0.9762 | 0.9914 | 33 |
| 0.0149 | 1.0 | 1.0 | 0.1094 | 0.9741 | 0.9914 | 34 |
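The Top-3-accuracy columns count a prediction as correct when the true class appears among the model's three highest-scoring outputs. A small self-contained illustration of the metric (hypothetical scores, not outputs of this model):

```python
def top_k_accuracy(scores, labels, k=3):
    """Fraction of samples whose true label is among the k top-scoring classes."""
    hits = 0
    for row, label in zip(scores, labels):
        # Indices of the k largest scores in this row.
        top_k = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        hits += label in top_k
    return hits / len(labels)
```

Because a top-k hit is strictly easier than a top-1 hit, Top-3-accuracy is always at least as high as plain accuracy, which is why the two validation columns converge as the model improves.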

### Framework versions

- Transformers 4.35.0
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1