CC1M-Adv-F: Million-Scale Datasets for Adversarial Robustness Evaluation of Vision Models

📌 Dataset Description

Current evaluations of adversarial robustness in vision models are often limited in scale, typically relying on subsets of datasets like CIFAR-10 or ImageNet. We believe that large-scale assessments, on the order of millions of samples, are essential for advancing the field. To enable such large-scale testing, we constructed a new dataset, CC1M, from CC3M by removing outlier images and retaining one million images.

Building on CC1M, we have created two adversarial variants: CC1M-Adv-C and CC1M-Adv-F, which are designed for different types of vision models.

  • CC1M-Adv-C is intended for evaluating image classification models. It was created with the Probability Margin Attack (PMA) [1], a novel attack method we propose for this setting, combined with a Surrogate Ensemble technique that boosts the transferability of the adversarial noise to different models (a loss sketch follows this list).

  • CC1M-Adv-F is designed for testing non-classification models. It was created with the Downstream Transfer Attack (DTA) [2], again using the Surrogate Ensemble technique to boost transferability across datasets and models.
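
To make the margin idea concrete, here is a minimal sketch of a probability-margin loss in the spirit of PMA; the exact formulation and attack loop are given in [1], and this snippet only illustrates the margin term:

```python
import torch

def probability_margin_loss(logits, labels):
    """Margin between the true-class probability and the best other class.

    A sketch in the spirit of PMA [1]; minimizing this margin pushes the
    model toward misclassification. See the paper for the exact loss.
    """
    probs = torch.softmax(logits, dim=1)
    p_true = probs.gather(1, labels.unsqueeze(1)).squeeze(1)
    # Mask out the true class before taking the runner-up probability.
    p_other = probs.scatter(1, labels.unsqueeze(1), -1.0).max(dim=1).values
    return p_true - p_other  # the attack drives this margin below zero
```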

Demo evaluations can be found on our Vision Safety Platform.

πŸ₯ CC1M

📌 Dataset Description

CC1M is a large-scale evaluation dataset containing one million images selected from CC3M, which consists of image-caption pairs covering a wide range of objects, scenes, and visual concepts. To construct CC1M, we first removed unavailable images, such as those displaying a "this image is unavailable" message due to expired URLs. Next, we filtered out noisy images that lack meaningful semantic content, such as random icons. To further refine the dataset, we applied the Local Intrinsic Dimensionality (LID) metric, which is effective for detecting adversarial images, backdoor images, and low-quality data that could negatively impact self-supervised contrastive learning. By computing LID scores on CLIP embeddings, we identified outliers with the Median Absolute Deviation (MAD) method and retained only those images whose LID scores were close to the median. This ensures that CC1M consists of high-quality, diverse images, making it well suited for robust evaluation tasks.
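
As a rough illustration of this filtering step, the sketch below estimates LID with the standard maximum-likelihood estimator over k-nearest-neighbor distances in CLIP embedding space, then applies a MAD-based outlier rule. The neighborhood size and cutoff are illustrative assumptions, not the values used to build CC1M:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def lid_scores(embeddings, k=20):
    """Maximum-likelihood LID estimate from k-nearest-neighbor distances."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(embeddings)
    dists, _ = nn.kneighbors(embeddings)
    dists = dists[:, 1:]                    # drop the zero self-distance
    ratios = dists / dists[:, -1:]          # r_i / r_k for i = 1..k
    return -1.0 / np.mean(np.log(ratios + 1e-12), axis=1)

def mad_keep_mask(scores, cutoff=3.0):
    """Keep samples whose LID score stays close to the median (MAD rule)."""
    med = np.median(scores)
    mad = 1.4826 * np.median(np.abs(scores - med))  # scaled to match std dev
    return np.abs(scores - med) <= cutoff * mad

# embeddings: an (N, d) array of CLIP image embeddings of candidate images
# kept_indices = np.flatnonzero(mad_keep_mask(lid_scores(embeddings)))
```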

📂 File Structure

cc1m
    |--000000000.jpg
    |--000000001.jpg
    |--000000002.jpg
    ...

🐞 CC1M-Adv-F

📌 Dataset Description

CC1M-Adv-F is created to evaluate vision feature extractors. It was generated by our Downstream Transfer Attack (DTA), which identifies the most vulnerable and transferable layer of ViTs pre-trained on ImageNet (e.g., with MAE or SimCLR). We improve the transferability of the generated adversarial examples by perturbing the feature layers of multiple pre-trained image encoders in parallel. For this, we selected eight widely used feature extractors from the timm library, covering a variety of model architectures and pre-training methods. The table below provides an overview of the surrogate models we used, and a hedged sketch of the ensemble attack follows it.

| Surrogate Model | Paper |
| --- | --- |
| vgg16 | Very deep convolutional networks for large-scale image recognition |
| resnet101 | Deep residual learning for image recognition |
| efficientnet | EfficientNet: Rethinking model scaling for convolutional neural networks |
| convnext_base | A ConvNet for the 2020s |
| vit_base_patch16_224 | An image is worth 16x16 words: Transformers for image recognition at scale |
| vit_base_patch16_224.dino | Emerging properties in self-supervised vision transformers |
| beit_base_patch16_224 | BEiT: BERT pre-training of image transformers |
| swin_base_patch4_window7_224 | Swin Transformer: Hierarchical vision transformer using shifted windows |
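
To make the surrogate-ensemble idea concrete, here is a minimal sketch of an L∞ PGD loop that pushes an image's features away from the clean features of several timm encoders at once. The three-model ensemble, feature layer (the pooled encoder output), loss, and hyperparameters are illustrative stand-ins, not the exact DTA settings from [2]:

```python
import timm
import torch
import torch.nn.functional as F

# A small stand-in ensemble; CC1M-Adv-F uses the eight surrogates listed above.
names = ["resnet101", "vit_base_patch16_224", "convnext_base"]
encoders = [timm.create_model(n, pretrained=True, num_classes=0).eval()
            for n in names]

def ensemble_feature_attack(x, eps=8 / 255, alpha=2 / 255, steps=10):
    """PGD that maximizes feature distortion across all surrogate encoders.

    x: a (B, 3, 224, 224) batch in [0, 1]; per-model input normalization
    is omitted here for brevity.
    """
    with torch.no_grad():
        clean = [enc(x) for enc in encoders]
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Sum of feature distortions over every surrogate in the ensemble.
        loss = sum(F.mse_loss(enc(x_adv), f) for enc, f in zip(encoders, clean))
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()  # ascend on the distortion
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```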

📂 File Structure

cc1m_adv_f
    |--000000000.jpg
    |--000000001.jpg
    |--000000002.jpg
    ...

📂 Examples

(Five example images; alt text: "This is an example image.")

🌲 Usage

We provide example code for using CC1M-Adv-F.
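
For instance, the minimal sketch below reads images from the cc1m_adv_f folder shown in File Structure and extracts features with a timm encoder. It assumes a recent version of timm and a local copy of the dataset; the directory path and model choice are placeholders:

```python
import os
import timm
import torch
from PIL import Image

# Any of the surrogate (or held-out) feature extractors can be used here.
model = timm.create_model("vit_base_patch16_224", pretrained=True,
                          num_classes=0).eval()
cfg = timm.data.resolve_model_data_config(model)
transform = timm.data.create_transform(**cfg, is_training=False)

root = "cc1m_adv_f"  # local copy of the folder shown above
for name in sorted(os.listdir(root))[:8]:   # first few images as a smoke test
    img = Image.open(os.path.join(root, name)).convert("RGB")
    with torch.no_grad():
        feats = model(transform(img).unsqueeze(0))
    print(name, tuple(feats.shape))         # e.g. ('000000000.jpg', (1, 768))
```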

⭐ Acknowledgements

Our work builds on the CC3M dataset:

Website: https://ai.google.com/research/ConceptualCaptions/

Paper: https://aclanthology.org/P18-1238/

GitHub: https://github.com/google-research-datasets/conceptual-captions

📜 Cite Us

If you use this dataset in your research, please cite it as follows:

[1]
@article{xie2024towards,
  title={Towards Million-Scale Adversarial Robustness Evaluation With Stronger Individual Attacks},
  author={Xie, Yong and Zheng, Weijie and Huang, Hanxun and Ye, Guangnan and Ma, Xingjun},
  journal={arXiv:2411.15210},
  year={2024}
}

[2]
@article{zheng2024downstream,
  title={Downstream Transfer Attack: Adversarial Attacks on Downstream Models with Pre-trained Vision Transformers},
  author={Zheng, Weijie and Ma, Xingjun and Huang, Hanxun and Wu, Zuxuan and Jiang, Yu-Gang},
  journal={arXiv:2408.01705},
  year={2024}
}