taesiri committed
Commit 628dcb1
1 Parent(s): 0256bce

Update README.md

Files changed (1)
  1. README.md +11 -11
README.md CHANGED
@@ -36,7 +36,7 @@ size_categories:
 ## Dataset Summary
 
 
-The **ImageNet-Hard** is a new benchmark that comprises 13,630 images, collected from various existing ImageNet-scale benchmarks (ImageNet, ImageNet-Sketch, ImageNet-C, ImageNet-R, ImageNet-ReaL, ImageNet-A, and ObjectNet).
+The **ImageNet-Hard** is a new benchmark that comprises 11,350 images, collected from various existing ImageNet-scale benchmarks (ImageNet, ImageNet-V2, ImageNet-Sketch, ImageNet-C, ImageNet-R, ImageNet-ReaL, ImageNet-A, and ObjectNet).
 This dataset is challenging to state-of-the-art vision models, as merely zooming in often fails to enhance their ability to classify images correctly. Consequently, even the most advanced models, such as `CLIP-ViT-L/14@336px`, struggle to perform well on this dataset, achieving only `2.02%` accuracy.
 
 
@@ -49,16 +49,16 @@ This dataset is challenging to state-of-the-art vision models, as merely zooming
 
 
 | Model               | Accuracy |
-| ------------------- | ----- |
-| ResNet-18           | 9.41  |
-| ResNet-50           | 12.56 |
-| ViT-B/32            | 15.95 |
-| VGG19               | 10.32 |
-| AlexNet             | 6.35  |
-| CLIP-ViT-L/14@224px | 1.86  |
-| CLIP-ViT-L/14@336px | 2.02  |
-| EfficientNet-L2-Ns  | 34.23 |
-
+| ------------------- | -------- |
+| ResNet-18           | 11.11    |
+| ResNet-50           | 14.91    |
+| ViT-B/32            | 18.78    |
+| VGG19               | 12.15    |
+| AlexNet             | 7.30     |
+| EfficientNet-B7     | 18.02    |
+| EfficientNet-L2-Ns  | 38.79    |
+| CLIP-ViT-L/14@224px | 2.11     |
+| CLIP-ViT-L/14@336px | 2.30     |
 
 **Evaluation Code**
 
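The hunks above only touch the summary paragraph and the accuracy table; the **Evaluation Code** block itself lies outside the diff context. For orientation, here is a minimal sketch of how such a benchmark run could look with the Hugging Face `datasets` library and a torchvision classifier. It is not the repository's own evaluation script: the dataset id `taesiri/imagenet-hard`, the `validation` split, and the `image`/`label` column layout are assumptions not confirmed by this commit, and the numbers it produces need not match the table.

```python
# Hedged sketch: top-1 evaluation of a pretrained ResNet-50 on ImageNet-Hard.
# Assumed (not taken from this commit): dataset id "taesiri/imagenet-hard",
# split "validation", and per-example fields "image" (PIL image) and "label"
# (one or more acceptable ImageNet class ids).
import torch
from datasets import load_dataset
from torchvision import models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Standard ImageNet preprocessing for torchvision classifiers.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.eval().to(device)

dataset = load_dataset("taesiri/imagenet-hard", split="validation")  # assumed id/split

correct, total = 0, 0
with torch.no_grad():
    for example in dataset:
        image = example["image"].convert("RGB")
        labels = example["label"]  # assumed field; may be an int or a list of ids
        if not isinstance(labels, (list, tuple)):
            labels = [labels]
        logits = model(preprocess(image).unsqueeze(0).to(device))
        pred = logits.argmax(dim=1).item()
        # Count a prediction as correct if it matches any acceptable label.
        correct += int(pred in labels)
        total += 1

print(f"Top-1 accuracy: {100.0 * correct / total:.2f}%")
```

Accuracy here is counted against any of an example's acceptable labels, which is one natural reading for a benchmark assembled from relabeled sources such as ImageNet-ReaL; if the dataset stores a single integer label instead, the membership check reduces to a plain equality test.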