NabeelShar committed on
Commit 28c5c1e
1 Parent(s): e26f0b2

Update README.md

Files changed (1)
  1. README.md +1 -93
README.md CHANGED
@@ -1,99 +1,7 @@
  ---
  license: apache-2.0
- tags:
- - generated_from_trainer
- datasets:
- - imagefolder
- metrics:
- - accuracy
- - precision
- - recall
- - f1
- model-index:
- - name: emotion-dectect
-   results:
-   - task:
-       name: Image Classification
-       type: image-classification
-     dataset:
-       name: imagefolder
-       type: imagefolder
-       config: default
-       split: train
-       args: default
-     metrics:
-     - name: Accuracy
-       type: accuracy
-       value: 0.8807339449541285
-     - name: Precision
-       type: precision
-       value: 0.8768597487153273
-     - name: Recall
-       type: recall
-       value: 0.8807339449541285
-     - name: F1
-       type: f1
-       value: 0.8782945902988435
- ---
-
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # google-vit-base-patch16-224-cartoon-emotion-detection
-
- This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.3706
- - Accuracy: 0.8807
- - Precision: 0.8769
- - Recall: 0.8807
- - F1: 0.8783
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 0.00012
- - train_batch_size: 64
- - eval_batch_size: 64
- - seed: 42
- - gradient_accumulation_steps: 4
- - total_train_batch_size: 256
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - lr_scheduler_warmup_ratio: 0.1
- - num_epochs: 10
-
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
- |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
- | No log | 0.97 | 8 | 0.9902 | 0.5596 | 0.5506 | 0.5596 | 0.5360 |
- | 1.242 | 1.97 | 16 | 0.5157 | 0.8165 | 0.8195 | 0.8165 | 0.8132 |
- | 0.4438 | 2.97 | 24 | 0.3871 | 0.8440 | 0.8516 | 0.8440 | 0.8446 |
- | 0.1768 | 3.97 | 32 | 0.3531 | 0.8624 | 0.8653 | 0.8624 | 0.8585 |
- | 0.0661 | 4.97 | 40 | 0.3780 | 0.8716 | 0.8693 | 0.8716 | 0.8674 |
- | 0.0661 | 5.97 | 48 | 0.3747 | 0.8624 | 0.8649 | 0.8624 | 0.8632 |
- | 0.0375 | 6.97 | 56 | 0.3760 | 0.8991 | 0.8961 | 0.8991 | 0.8971 |
- | 0.0362 | 7.97 | 64 | 0.4092 | 0.8716 | 0.8684 | 0.8716 | 0.8681 |
- | 0.0322 | 8.97 | 72 | 0.3499 | 0.8899 | 0.8880 | 0.8899 | 0.8888 |
- | 0.029 | 9.97 | 80 | 0.3706 | 0.8807 | 0.8769 | 0.8807 | 0.8783 |
-
 
- ### Framework versions
+ #versions
 
  - Transformers 4.25.1
  - Pytorch 1.13.1+cu117
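The hyperparameters that this commit removes from the card map directly onto `transformers` `TrainingArguments`. The following is only a sketch of that mapping: the `output_dir` name is a placeholder, and a single device is assumed so that 64 × 4 gradient-accumulation steps matches the reported total train batch size of 256; the actual training script is not part of this commit.

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed in the removed card,
# expressed as TrainingArguments (transformers 4.25.x).
training_args = TrainingArguments(
    output_dir="emotion-dectect",    # placeholder, not stated in the diff
    learning_rate=0.00012,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    gradient_accumulation_steps=4,   # 64 * 4 = total train batch size of 256
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
)
```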
 
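After this commit the card keeps only the framework versions, so for context, here is a minimal inference sketch for the checkpoint the removed text describes. The repository id (guessed from the commit author and the `model-index` name) and the image path are illustrative placeholders; neither appears in this diff.

```python
from transformers import pipeline

# Illustrative only: replace the repo id with the actual checkpoint location.
classifier = pipeline(
    task="image-classification",
    model="NabeelShar/emotion-dectect",  # hypothetical repo id
)

# Accepts a local path, URL, or PIL.Image; returns a list of
# {"label": ..., "score": ...} dicts, with labels taken from the
# fine-tuned model's config.
predictions = classifier("example_cartoon_face.png")  # placeholder image
print(predictions)
```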