Spalne committed on
Commit ea2591f
1 Parent(s): 02ad327

Model save

Files changed (1)
  1. README.md +2 -12
README.md CHANGED

@@ -3,11 +3,7 @@ library_name: transformers
  license: apache-2.0
  base_model: google/vit-base-patch16-224-in21k
  tags:
- - image-classification
- - vision
  - generated_from_trainer
- metrics:
- - accuracy
  model-index:
  - name: vit-base-patch16-224-in21k
    results: []

@@ -18,13 +14,7 @@ should probably proofread and complete it, then remove this comment. -->

  # vit-base-patch16-224-in21k

- This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the chainyo/rvl-cdip dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.4218
- - Accuracy: 0.8789
- - Memory Allocated (gb): 1.47
- - Max Memory Allocated (gb): 2.09
- - Total Memory Available (gb): 94.62
+ This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.

  ## Model description

@@ -59,5 +49,5 @@ The following hyperparameters were used during training:

  - Transformers 4.45.2
  - Pytorch 2.4.0a0+git74cd574
- - Datasets 3.0.2
+ - Datasets 3.1.0
  - Tokenizers 0.20.1
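
For reference, a minimal sketch of loading the saved checkpoint for inference with transformers. The repo id `Spalne/vit-base-patch16-224-in21k` and the input file name are assumptions inferred from the committer and the model-index name above, not confirmed by this commit.

```python
# Minimal sketch, not part of the commit: load the fine-tuned ViT checkpoint
# and classify a single image with transformers.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Assumed Hub repo id (committer name + model-index name); replace with the real one.
repo_id = "Spalne/vit-base-patch16-224-in21k"

processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)
model.eval()

image = Image.open("example.png").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])
```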