---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: dwiedarioo/vit-base-patch16-224-in21k-final2multibrainmri
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dwiedarioo/vit-base-patch16-224-in21k-final2multibrainmri
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results after the final training epoch (a minimal inference sketch follows this list):
- Train Loss: 0.0072
- Train Accuracy: 1.0
- Train Top-3-accuracy: 1.0
- Validation Loss: 0.1111
- Validation Accuracy: 0.9719
- Validation Top-3-accuracy: 0.9914
- Epoch: 49
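This card does not yet include usage instructions, so here is a minimal inference sketch. It assumes the image processor configuration was pushed alongside the weights (if not, the processor of the base checkpoint `google/vit-base-patch16-224-in21k` applies the same 224x224 preprocessing), and the image path is a placeholder.

```python
import tensorflow as tf
from PIL import Image
from transformers import AutoImageProcessor, TFViTForImageClassification

model_id = "dwiedarioo/vit-base-patch16-224-in21k-final2multibrainmri"
processor = AutoImageProcessor.from_pretrained(model_id)
model = TFViTForImageClassification.from_pretrained(model_id)

# Placeholder path: replace with an actual MRI slice exported as an RGB image.
image = Image.open("scan.png").convert("RGB")
inputs = processor(images=image, return_tensors="tf")

logits = model(**inputs).logits
predicted = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label[predicted])
```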
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: AdamWeightDecay (from `transformers.optimization_tf`), wrapped in a dynamic loss-scale optimizer (initial scale 32768.0, growth every 2000 steps); see the sketch after this list
  - learning rate: PolynomialDecay from 3e-05 to 0.0 over 8200 decay steps (power 1.0, no cycling)
  - beta_1: 0.9, beta_2: 0.999, epsilon: 1e-08, amsgrad: False
  - weight_decay_rate: 0.01
- training_precision: mixed_float16
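A hedged re-creation of this optimizer and precision setup with the TF utilities shipped in Transformers 4.35 / TensorFlow 2.14 might look as follows; the warmup step count is taken as 0 because none is listed above, and anything not recorded in this card (batch size, data pipeline) is omitted:

```python
import tensorflow as tf
from transformers import create_optimizer

# AdamWeightDecay with a linear (power=1.0) PolynomialDecay from 3e-05 to 0.0
# over 8200 steps, matching the configuration listed above.
optimizer, lr_schedule = create_optimizer(
    init_lr=3e-5,
    num_train_steps=8200,   # decay_steps from the config above
    num_warmup_steps=0,     # assumption: no warmup appears in the config
    weight_decay_rate=0.01,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)

# mixed_float16 training precision; when the model is compiled, Keras wraps the
# optimizer in a dynamic loss-scale optimizer like the one in the config above.
tf.keras.mixed_precision.set_global_policy("mixed_float16")
```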
### Training results
| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 2.2742 | 0.3856 | 0.6522 | 1.8596 | 0.6112 | 0.8337 | 0 |
| 1.5673 | 0.6919 | 0.8778 | 1.3120 | 0.7883 | 0.9136 | 1 |
| 1.0377 | 0.8622 | 0.9576 | 0.9078 | 0.8661 | 0.9611 | 2 |
| 0.6816 | 0.9511 | 0.9859 | 0.6497 | 0.9222 | 0.9849 | 3 |
| 0.4698 | 0.9805 | 0.9939 | 0.5104 | 0.9395 | 0.9870 | 4 |
| 0.3375 | 0.9897 | 0.9973 | 0.3975 | 0.9590 | 0.9892 | 5 |
| 0.2554 | 0.9966 | 0.9992 | 0.3107 | 0.9676 | 0.9978 | 6 |
| 0.2346 | 0.9905 | 0.9992 | 0.3804 | 0.9287 | 0.9914 | 7 |
| 0.1976 | 0.9935 | 0.9989 | 0.3250 | 0.9546 | 0.9914 | 8 |
| 0.1686 | 0.9939 | 0.9992 | 0.4980 | 0.8920 | 0.9762 | 9 |
| 0.1423 | 0.9969 | 0.9996 | 0.2129 | 0.9654 | 0.9957 | 10 |
| 0.1073 | 0.9992 | 1.0 | 0.1840 | 0.9741 | 0.9978 | 11 |
| 0.0925 | 0.9992 | 1.0 | 0.1714 | 0.9719 | 0.9978 | 12 |
| 0.0809 | 0.9992 | 1.0 | 0.1595 | 0.9719 | 0.9978 | 13 |
| 0.0715 | 0.9992 | 1.0 | 0.1503 | 0.9719 | 0.9978 | 14 |
| 0.0637 | 1.0 | 1.0 | 0.1426 | 0.9762 | 0.9978 | 15 |
| 0.0573 | 0.9996 | 1.0 | 0.1361 | 0.9784 | 0.9978 | 16 |
| 0.0516 | 1.0 | 1.0 | 0.1325 | 0.9784 | 0.9957 | 17 |
| 0.0469 | 1.0 | 1.0 | 0.1279 | 0.9784 | 0.9957 | 18 |
| 0.0427 | 1.0 | 1.0 | 0.1248 | 0.9784 | 0.9957 | 19 |
| 0.0392 | 1.0 | 1.0 | 0.1224 | 0.9784 | 0.9957 | 20 |
| 0.0359 | 1.0 | 1.0 | 0.1191 | 0.9784 | 0.9957 | 21 |
| 0.0331 | 1.0 | 1.0 | 0.1178 | 0.9762 | 0.9914 | 22 |
| 0.0306 | 1.0 | 1.0 | 0.1162 | 0.9784 | 0.9957 | 23 |
| 0.0284 | 1.0 | 1.0 | 0.1144 | 0.9784 | 0.9957 | 24 |
| 0.0264 | 1.0 | 1.0 | 0.1143 | 0.9741 | 0.9957 | 25 |
| 0.0246 | 1.0 | 1.0 | 0.1126 | 0.9762 | 0.9957 | 26 |
| 0.0230 | 1.0 | 1.0 | 0.1104 | 0.9784 | 0.9957 | 27 |
| 0.0215 | 1.0 | 1.0 | 0.1110 | 0.9762 | 0.9935 | 28 |
| 0.0201 | 1.0 | 1.0 | 0.1091 | 0.9762 | 0.9957 | 29 |
| 0.0189 | 1.0 | 1.0 | 0.1101 | 0.9741 | 0.9957 | 30 |
| 0.0178 | 1.0 | 1.0 | 0.1099 | 0.9762 | 0.9914 | 31 |
| 0.0167 | 1.0 | 1.0 | 0.1091 | 0.9762 | 0.9935 | 32 |
| 0.0158 | 1.0 | 1.0 | 0.1091 | 0.9762 | 0.9914 | 33 |
| 0.0149 | 1.0 | 1.0 | 0.1094 | 0.9741 | 0.9914 | 34 |
| 0.0141 | 1.0 | 1.0 | 0.1088 | 0.9719 | 0.9914 | 35 |
| 0.0134 | 1.0 | 1.0 | 0.1089 | 0.9762 | 0.9914 | 36 |
| 0.0127 | 1.0 | 1.0 | 0.1084 | 0.9741 | 0.9935 | 37 |
| 0.0120 | 1.0 | 1.0 | 0.1087 | 0.9741 | 0.9914 | 38 |
| 0.0114 | 1.0 | 1.0 | 0.1078 | 0.9741 | 0.9914 | 39 |
| 0.0109 | 1.0 | 1.0 | 0.1088 | 0.9719 | 0.9914 | 40 |
| 0.0104 | 1.0 | 1.0 | 0.1087 | 0.9719 | 0.9914 | 41 |
| 0.0099 | 1.0 | 1.0 | 0.1094 | 0.9719 | 0.9935 | 42 |
| 0.0094 | 1.0 | 1.0 | 0.1095 | 0.9719 | 0.9914 | 43 |
| 0.0090 | 1.0 | 1.0 | 0.1099 | 0.9719 | 0.9914 | 44 |
| 0.0086 | 1.0 | 1.0 | 0.1112 | 0.9719 | 0.9914 | 45 |
| 0.0082 | 1.0 | 1.0 | 0.1104 | 0.9719 | 0.9914 | 46 |
| 0.0079 | 1.0 | 1.0 | 0.1107 | 0.9719 | 0.9914 | 47 |
| 0.0075 | 1.0 | 1.0 | 0.1102 | 0.9741 | 0.9914 | 48 |
| 0.0072 | 1.0 | 1.0 | 0.1111 | 0.9719 | 0.9914 | 49 |
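The accuracy and top-3-accuracy columns above correspond to standard Keras metrics. A sketch of how the model might have been compiled to report them, assuming sparse integer labels and the model's built-in loss (with `model` and `optimizer` taken from the sketches above):

```python
import tensorflow as tf

# TF models from transformers compute their own loss when labels are passed,
# so compile() can be called without an explicit loss argument.
model.compile(
    optimizer=optimizer,
    metrics=[
        tf.keras.metrics.SparseCategoricalAccuracy(name="accuracy"),
        tf.keras.metrics.SparseTopKCategoricalAccuracy(k=3, name="top-3-accuracy"),
    ],
)
```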
### Framework versions
- Transformers 4.35.0
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1