itsLeen committed
Commit b57aad7
1 Parent(s): c2b9437

Model save

README.md CHANGED
@@ -1,29 +1,14 @@
 ---
+library_name: transformers
 license: apache-2.0
-base_model: dima806/deepfake_vs_real_image_detection
+base_model: google/vit-base-patch16-224
 tags:
-- image-classification
 - generated_from_trainer
-datasets:
-- imagefolder
 metrics:
 - accuracy
 model-index:
 - name: realFake-img
-  results:
-  - task:
-      name: Image Classification
-      type: image-classification
-    dataset:
-      name: ai_real_images
-      type: imagefolder
-      config: default
-      split: train
-      args: default
-    metrics:
-    - name: Accuracy
-      type: accuracy
-      value: 0.8518181818181818
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -31,10 +16,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # realFake-img
 
-This model is a fine-tuned version of [dima806/deepfake_vs_real_image_detection](https://huggingface.co/dima806/deepfake_vs_real_image_detection) on the ai_real_images dataset.
+This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.3329
-- Accuracy: 0.8518
+- Loss: 0.0988
+- Accuracy: 0.9785
 
 ## Model description
 
@@ -59,33 +44,57 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 4
+- num_epochs: 10
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:------:|:----:|:---------------:|:--------:|
-| 0.4892 | 0.2564 | 100 | 0.5756 | 0.7227 |
-| 0.683 | 0.5128 | 200 | 0.6742 | 0.6373 |
-| 0.3737 | 0.7692 | 300 | 0.5462 | 0.7555 |
-| 0.3554 | 1.0256 | 400 | 0.4354 | 0.8009 |
-| 0.2368 | 1.2821 | 500 | 0.4046 | 0.8309 |
-| 0.3696 | 1.5385 | 600 | 0.5547 | 0.7809 |
-| 0.2824 | 1.7949 | 700 | 0.3329 | 0.8518 |
-| 0.2366 | 2.0513 | 800 | 0.4582 | 0.8255 |
-| 0.2212 | 2.3077 | 900 | 0.4885 | 0.8255 |
-| 0.2031 | 2.5641 | 1000 | 0.4282 | 0.8564 |
-| 0.1717 | 2.8205 | 1100 | 0.4373 | 0.85 |
-| 0.1303 | 3.0769 | 1200 | 0.3659 | 0.8718 |
-| 0.0889 | 3.3333 | 1300 | 0.3663 | 0.8736 |
-| 0.1157 | 3.5897 | 1400 | 0.4588 | 0.8436 |
-| 0.1215 | 3.8462 | 1500 | 0.4350 | 0.8655 |
+| 0.2578 | 0.2525 | 100 | 0.1594 | 0.9418 |
+| 0.0944 | 0.5051 | 200 | 0.2243 | 0.9373 |
+| 0.1747 | 0.7576 | 300 | 0.2472 | 0.9293 |
+| 0.1328 | 1.0101 | 400 | 0.1774 | 0.9338 |
+| 0.1918 | 1.2626 | 500 | 0.1282 | 0.9570 |
+| 0.169 | 1.5152 | 600 | 0.2247 | 0.9346 |
+| 0.2595 | 1.7677 | 700 | 0.1785 | 0.9445 |
+| 0.0911 | 2.0202 | 800 | 0.1353 | 0.9534 |
+| 0.0548 | 2.2727 | 900 | 0.1998 | 0.9472 |
+| 0.1399 | 2.5253 | 1000 | 0.1971 | 0.9445 |
+| 0.2001 | 2.7778 | 1100 | 0.2479 | 0.9373 |
+| 0.0976 | 3.0303 | 1200 | 0.1601 | 0.9499 |
+| 0.1291 | 3.2828 | 1300 | 0.1607 | 0.9588 |
+| 0.0721 | 3.5354 | 1400 | 0.1822 | 0.9588 |
+| 0.0592 | 3.7879 | 1500 | 0.1255 | 0.9624 |
+| 0.0964 | 4.0404 | 1600 | 0.1620 | 0.9543 |
+| 0.0738 | 4.2929 | 1700 | 0.1279 | 0.9651 |
+| 0.0504 | 4.5455 | 1800 | 0.1624 | 0.9588 |
+| 0.0972 | 4.7980 | 1900 | 0.1579 | 0.9624 |
+| 0.0456 | 5.0505 | 2000 | 0.1965 | 0.9490 |
+| 0.0334 | 5.3030 | 2100 | 0.1652 | 0.9570 |
+| 0.0242 | 5.5556 | 2200 | 0.1182 | 0.9749 |
+| 0.0715 | 5.8081 | 2300 | 0.1250 | 0.9651 |
+| 0.0407 | 6.0606 | 2400 | 0.1172 | 0.9696 |
+| 0.0003 | 6.3131 | 2500 | 0.0819 | 0.9785 |
+| 0.0072 | 6.5657 | 2600 | 0.1406 | 0.9714 |
+| 0.0183 | 6.8182 | 2700 | 0.1152 | 0.9749 |
+| 0.0021 | 7.0707 | 2800 | 0.1368 | 0.9731 |
+| 0.046 | 7.3232 | 2900 | 0.0900 | 0.9794 |
+| 0.033 | 7.5758 | 3000 | 0.1014 | 0.9785 |
+| 0.0354 | 7.8283 | 3100 | 0.0968 | 0.9767 |
+| 0.0026 | 8.0808 | 3200 | 0.1217 | 0.9731 |
+| 0.0002 | 8.3333 | 3300 | 0.0828 | 0.9794 |
+| 0.0006 | 8.5859 | 3400 | 0.0926 | 0.9794 |
+| 0.0006 | 8.8384 | 3500 | 0.1001 | 0.9794 |
+| 0.0006 | 9.0909 | 3600 | 0.0863 | 0.9848 |
+| 0.0633 | 9.3434 | 3700 | 0.0911 | 0.9803 |
+| 0.0009 | 9.5960 | 3800 | 0.0941 | 0.9821 |
+| 0.0247 | 9.8485 | 3900 | 0.0988 | 0.9785 |
 
 
 ### Framework versions
 
-- Transformers 4.42.4
+- Transformers 4.44.2
 - Pytorch 2.4.0+cu121
 - Datasets 2.21.0
 - Tokenizers 0.19.1
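As a quick sanity check of the retrained checkpoint described in the updated card, the model can be exercised with the standard `transformers` image-classification pipeline. A minimal sketch, assuming the repository id is `itsLeen/realFake-img` (inferred from the author and model name above) and a local test image at `test.jpg`:

```python
from transformers import pipeline

# Assumed repo id; replace with the actual Hub path of this checkpoint.
classifier = pipeline("image-classification", model="itsLeen/realFake-img")

# Accepts a local path, URL, or PIL.Image; returns label/score pairs
# using the id2label names from config.json ("Fake" / "Real").
print(classifier("test.jpg"))
```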
config.json CHANGED
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "dima806/deepfake_vs_real_image_detection",
+  "_name_or_path": "google/vit-base-patch16-224",
   "architectures": [
     "ViTForImageClassification"
   ],
@@ -9,15 +9,15 @@
   "hidden_dropout_prob": 0.0,
   "hidden_size": 768,
   "id2label": {
-    "0": "AiArtData",
-    "1": "RealArt"
+    "0": "Fake",
+    "1": "Real"
   },
   "image_size": 224,
   "initializer_range": 0.02,
   "intermediate_size": 3072,
   "label2id": {
-    "AiArtData": "0",
-    "RealArt": "1"
+    "Fake": "0",
+    "Real": "1"
   },
   "layer_norm_eps": 1e-12,
   "model_type": "vit",
@@ -28,5 +28,5 @@
   "problem_type": "single_label_classification",
   "qkv_bias": true,
   "torch_dtype": "float32",
-  "transformers_version": "4.42.4"
+  "transformers_version": "4.44.2"
 }
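The config change above renames the classes from `AiArtData`/`RealArt` to `Fake`/`Real`, and those names are what downstream code sees through `id2label`. A sketch of the manual forward pass under the same assumed repo id (`itsLeen/realFake-img`) and a hypothetical local image file:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "itsLeen/realFake-img"  # assumed; adjust to the real Hub path
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)

image = Image.open("example.jpg").convert("RGB")  # hypothetical input file
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Take the argmax over the two logits, then map it through the updated id2label.
pred_id = logits.argmax(-1).item()
print(model.config.id2label[pred_id])  # "Fake" or "Real"
```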
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:6117692cdc1f7f9d4df4e0157ae89b660794059f08686bfe7ea2acee9bb72297
+oid sha256:f41508a1be5071cec6928232738f3de3bcdc41ce4598a2a58f2613004a6409cb
 size 343223968
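`model.safetensors` is stored through Git LFS, so the diff only touches the pointer file: the `oid sha256:...` line is the SHA-256 digest of the actual weights and `size` is their byte length. A small sketch for checking that a downloaded file matches the new pointer (assumed local path):

```python
import hashlib

def lfs_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file and return its hex SHA-256, the value used in the LFS pointer's oid field."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            digest.update(block)
    return digest.hexdigest()

# Compare against the new pointer:
# f41508a1be5071cec6928232738f3de3bcdc41ce4598a2a58f2613004a6409cb
print(lfs_sha256("model.safetensors"))
```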
runs/Sep07_18-57-16_6d5e3fd650ca/events.out.tfevents.1725735452.6d5e3fd650ca.1013.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:140d9dfa18fb02bd0f657ef0c0e90e26f4dae54d05d603b60af6946741c11319
+size 121799
training_args.bin CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:7f170978f3e0b7b6312fe562b8268fb5e8464ea44029c27506135d1446e5018a
3
- size 5112
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:93e6eb44065ffcb1aacf2660e126ba36b49f18377071489c44b6c19b7b9ba491
3
+ size 5176
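`training_args.bin` is likewise an LFS-tracked binary: it is the pickled `TrainingArguments` object the Trainer saved alongside the checkpoint, which is how hyperparameters such as `num_epochs: 10` and `seed: 42` from the card can be inspected after the fact. A sketch, assuming the file has been downloaded locally and `transformers` is installed so the pickle can be resolved:

```python
import torch

# The Trainer pickles its TrainingArguments into training_args.bin;
# weights_only=False because this is a full Python object, not a tensor file.
args = torch.load("training_args.bin", weights_only=False)

print(args.num_train_epochs, args.learning_rate, args.seed)
```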