---
license: apache-2.0
tags:
- image-classification
- vision
- generated_from_trainer
datasets:
- AI-Lab-Makerere/beans
metrics:
- accuracy
base_model: google/vit-base-patch16-224-in21k
model-index:
- name: vit-base-beans
  results:
  - task:
      type: image-classification
      name: Image Classification
    dataset:
      name: beans
      type: beans
      config: default
      split: validation
      args: default
    metrics:
    - type: accuracy
      value: 0.9849624060150376
      name: Accuracy
---
# THIS IS A TEST REPO FOR DEBUGGING!
This repo exists as a result of playing with and debugging training scripts and push-to-hub features. As such, the TensorFlow and PyTorch models will be out of sync, and different weights may be pushed at any time, including models with very low performance.
# vit-base-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0630
- Accuracy: 0.9850
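As a quick usage reference, the sketch below runs inference with this checkpoint through the standard `transformers` image-classification pipeline. The repo id and image path are placeholders, not values confirmed by this card.

```python
from transformers import pipeline
from PIL import Image

# Placeholder repo id; substitute the actual Hub id of this checkpoint.
classifier = pipeline("image-classification", model="<user>/vit-base-beans")

# Any RGB photo of a bean leaf; the path is illustrative.
image = Image.open("bean_leaf.jpg")

for prediction in classifier(image):
    print(f"{prediction['label']}: {prediction['score']:.4f}")
```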
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
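For context, the beans dataset named in the metadata can be loaded from the Hub as sketched below; the split and column names shown are assumptions based on the public dataset, not on this training run.

```python
from datasets import load_dataset

# Loads the beans leaf-disease dataset referenced in the card metadata.
dataset = load_dataset("AI-Lab-Makerere/beans")

print(dataset)                     # expected: train / validation / test splits
print(dataset["train"][0].keys())  # expected: an image plus an integer label column
```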
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
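A minimal sketch of how these values map onto `transformers.TrainingArguments` follows; any option not listed above (evaluation strategy, output directory, and so on) is an assumption, not part of the recorded configuration.

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; unlisted options keep library defaults.
training_args = TrainingArguments(
    output_dir="vit-base-beans",      # assumption: output directory name
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=1337,
    lr_scheduler_type="linear",
    num_train_epochs=5.0,
    evaluation_strategy="epoch",      # assumption: matches the per-epoch results table
)
```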
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3038 | 1.0 | 130 | 0.2396 | 0.9624 |
| 0.1609 | 2.0 | 260 | 0.1130 | 0.9774 |
| 0.2313 | 3.0 | 390 | 0.0809 | 0.9850 |
| 0.1436 | 4.0 | 520 | 0.0738 | 0.9850 |
| 0.1086 | 5.0 | 650 | 0.0630 | 0.9850 |
### Framework versions
- Transformers 4.27.0.dev0
- PyTorch 1.14.0.dev20221118
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2