tarekziade committed • Commit 5d76eb5 • Parent(s): 592c106 • Update README.md

README.md CHANGED

  example_title: Savanna
---

This model is a fine-tuned version of [facebook/deit-tiny-distilled-patch16-224](https://huggingface.co/facebook/deit-tiny-distilled-patch16-224) on the [docornot](https://huggingface.co/datasets/tarekziade/docornot) dataset.

It achieves the following results on the evaluation set:
- Accuracy: 1.0

# CO2 emissions

Training this model on an Apple M1 emitted 0.322 g of CO2, as measured with [CodeCarbon](https://codecarbon.io/).
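
The card names the tool but not how it was wired in; below is a minimal sketch of how CodeCarbon is typically wrapped around a training run (`train()` is a stand-in for the actual fine-tuning code):

```python
from codecarbon import EmissionsTracker

def train():
    """Placeholder for the actual fine-tuning loop."""
    pass

tracker = EmissionsTracker(project_name="docornot")
tracker.start()
try:
    train()
finally:
    emissions_kg = tracker.stop()  # CodeCarbon reports emissions in kg of CO2

print(f"{emissions_kg * 1000:.3f} g of CO2")
```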

# Model description

This model is a distilled Vision Transformer (ViT).
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded; at the checkpoint's 224x224 input resolution this gives (224 / 16)² = 196 patches per image.
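
For illustration, this is how the base checkpoint's image processor prepares an image at that resolution (the file name is a placeholder):

```python
from PIL import Image
from transformers import AutoImageProcessor

processor = AutoImageProcessor.from_pretrained("facebook/deit-tiny-distilled-patch16-224")
image = Image.open("example.jpg")  # placeholder file name

# The processor resizes and normalizes the image to the 224x224 input the
# checkpoint expects; the model then embeds (224 // 16) ** 2 = 196 patches
# of 16x16 pixels.
inputs = processor(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # torch.Size([1, 3, 224, 224])
```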

# Intended uses & limitations

You can use this model to detect whether an image is a picture or a document.
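
A minimal inference sketch using the transformers pipeline API; the model id and the label names below are placeholders, since the card does not spell them out:

```python
from transformers import pipeline

# Placeholder model id: substitute this repository's id.
classifier = pipeline("image-classification", model="<this-repository-id>")

result = classifier("scanned_page.png")  # path or URL of the image to classify
print(result)
# Hypothetical output; actual label names depend on the dataset's class mapping:
# [{'label': 'document', 'score': 0.99}, {'label': 'picture', 'score': 0.01}]
```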

# Training procedure

The source code used to generate this model is available at https://github.com/tarekziade/docornot.

## Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- lr_scheduler_type: linear
- num_epochs: 1
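
As a sketch, these settings map onto transformers `TrainingArguments` roughly as follows; the output directory is a placeholder, and arguments not listed above (batch size, optimizer, seed) are left at their defaults rather than guessed:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="deit-docornot",  # placeholder output path
    learning_rate=5e-05,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```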

## Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0           | 1.0   | 1600 | 0.0000          | 1.0      |

## Framework versions

- Transformers 4.39.2
- PyTorch 2.2.2