hemakumari committed
Commit 11e2def · verified · 1 Parent(s): 88414a1

Model save

Files changed (2):
  1. README.md +14 -14
  2. model.safetensors +1 -1
README.md CHANGED
@@ -4,7 +4,7 @@ base_model: google/vit-base-patch16-224-in21k
 tags:
 - generated_from_trainer
 datasets:
-- image_folder
+- imagefolder
 metrics:
 - accuracy
 model-index:
@@ -14,15 +14,15 @@ model-index:
       name: Image Classification
       type: image-classification
     dataset:
-      name: image_folder
-      type: image_folder
+      name: imagefolder
+      type: imagefolder
       config: default
       split: train
       args: default
     metrics:
     - name: Accuracy
       type: accuracy
-      value: 0.28735632183908044
+      value: 0.3103448275862069
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -30,10 +30,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # vit-base-patch16-224-in21k-finetune
 
-This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the image_folder dataset.
+This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.5921
-- Accuracy: 0.2874
+- Loss: 1.5597
+- Accuracy: 0.3103
 
 ## Model description
 
@@ -67,16 +67,16 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-| No log        | 0.92  | 3    | 1.6042          | 0.2414   |
-| No log        | 1.85  | 6    | 1.6004          | 0.2414   |
-| No log        | 2.77  | 9    | 1.5964          | 0.2759   |
-| 1.4057        | 4.0   | 13   | 1.5928          | 0.2874   |
-| 1.4057        | 4.62  | 15   | 1.5921          | 0.2874   |
+| No log        | 0.92  | 3    | 1.5697          | 0.2874   |
+| No log        | 1.85  | 6    | 1.5657          | 0.2759   |
+| No log        | 2.77  | 9    | 1.5628          | 0.2759   |
+| 1.5842        | 4.0   | 13   | 1.5602          | 0.3103   |
+| 1.5842        | 4.62  | 15   | 1.5597          | 0.3103   |
 
 
 ### Framework versions
 
-- Transformers 4.38.1
+- Transformers 4.39.3
 - Pytorch 2.1.2
-- Datasets 2.1.0
+- Datasets 2.18.0
 - Tokenizers 0.15.2
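The accuracy values in the card are exact fractions, which lets the change be read as a count of correct predictions. A minimal sketch, assuming an evaluation set of 87 images — an inference from the repeating decimals, not something the card states:

```python
# Hypothetical reconstruction: the card never gives the eval-set size;
# 87 is inferred because 25/87 and 27/87 reproduce the reported decimals.
eval_size = 87

before = 25 / eval_size  # old card value: 0.28735632183908044
after = 27 / eval_size   # new card value: 0.3103448275862069

# The rounded figures match the "Accuracy" lines in the prose section.
assert round(before, 4) == 0.2874
assert round(after, 4) == 0.3103
```

Under that assumption, the commit's metric change amounts to two additional correct predictions on the evaluation set.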
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:4ac972eb8a267f9f10b2c01eabdb693dd5b32ed8247677c85901b1c5ff97c01e
+oid sha256:8c77817f14fbfaa3ef34879af32a121f771c75702fc8044de7063037e04c8717
 size 343233204
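Only this small pointer file lives in git; the `oid` line is the SHA-256 of the actual weights blob, so a download can be checked against it. A minimal standard-library sketch (the file path is illustrative, not part of the commit):

```python
import hashlib

def lfs_oid(path, chunk_size=1 << 20):
    """Compute the sha256 digest that a Git LFS pointer records as its oid."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in chunks so large weight files don't need to fit in memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the oid from this commit's pointer:
# lfs_oid("model.safetensors") == "8c77817f14fbfaa3ef34879af32a121f771c75702fc8044de7063037e04c8717"
```

The same check explains why the `size` line is unchanged while the `oid` differs: retraining rewrote the tensor values but not the file's byte length.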