Update README.md
## Dataset Summary

**ImageNet-Hard** is a new benchmark that comprises 10,980 images, collected from various existing ImageNet-scale benchmarks (ImageNet, ImageNet-V2, ImageNet-Sketch, ImageNet-C, ImageNet-R, ImageNet-ReaL, ImageNet-A, and ObjectNet).

This dataset is challenging for state-of-the-art vision models, as merely zooming in often fails to enhance their ability to classify images correctly. Consequently, even the most advanced models, such as `CLIP-ViT-L/14@336px`, struggle to perform well on this dataset, achieving only `2.02%` accuracy.

### Dataset Distribution

| Model               | Accuracy (%) |
| ------------------- | ------------ |
| ResNet-18           | 10.86        |
| ResNet-50           | 14.74        |
| ViT-B/32            | 18.52        |
| VGG19               | 11.99        |
| AlexNet             | 7.34         |
| EfficientNet-B7     | 17.81        |
| EfficientNet-L2-Ns  | 39.00        |
| CLIP-ViT-L/14@224px | 1.86         |
| CLIP-ViT-L/14@336px | 2.02         |

**Evaluation Code**
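The accuracies above reduce to a top-1 check over the benchmark. As a minimal sketch (a hypothetical helper, not the repository's actual evaluation script), assuming each image may carry one or more valid class ids, a prediction can be counted correct if it matches any of them:

```python
def top1_accuracy(predictions, label_sets):
    """Percent of images whose predicted class id appears in that
    image's collection of valid class ids (single- or multi-label)."""
    if not predictions:
        return 0.0
    correct = sum(pred in labels
                  for pred, labels in zip(predictions, label_sets))
    return 100.0 * correct / len(predictions)

# Toy example: 2 of 3 predictions hit a valid label.
print(round(top1_accuracy([1, 2, 3], [[1], [5, 2], [7]]), 2))  # 66.67
```

In practice the predictions would come from running a classifier over the dataset's images and the label sets from its ground-truth annotations; the helper name and the multi-label assumption are illustrative only.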