Jiaming94 committed
Commit 4c7172a · verified · Parent(s): 98c77fa

Update README.md

Files changed (1): README.md (+39 -3)
---
license: mit
---

# AnyAttack: Towards Large-scale Self-supervised Adversarial Attacks on Vision-Language Models

## TL;DR
**AnyAttack** is an adversarial attack model that transforms ordinary images into targeted adversarial examples capable of misleading Vision-Language Models (VLMs). Pre-trained on the **LAION-400M dataset**, it enables a benign image (e.g., a dog) to be misinterpreted by VLMs as any specified content (e.g., "this is violent content"), and the attack works across both **open-source** and **commercial** models.

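To make "misleading a VLM" concrete, below is a minimal sketch of how one might check whether an adversarial image has been pulled toward a target caption in an open-source CLIP encoder, a common VLM vision backbone. It uses the Hugging Face `transformers` CLIP API; the checkpoint, captions, and the randomly generated stand-in image are illustrative assumptions, not part of the AnyAttack release.

```python
# Sketch: score an (adversarial) image against candidate captions with CLIP.
# The checkpoint, captions, and stand-in image are illustrative assumptions.
import numpy as np
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Stand-in for an adversarial image produced by the AnyAttack generator.
image = Image.fromarray((np.random.rand(224, 224, 3) * 255).astype("uint8"))

captions = ["a photo of a dog", "violent content"]
inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]

# A successful targeted attack shifts probability mass toward the target caption.
print({c: round(p.item(), 3) for c, p in zip(captions, probs)})
```
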
## Model Overview
**AnyAttack** is designed to generate adversarial examples efficiently and at scale. Unlike traditional adversarial methods, it does not require predefined labels and instead leverages a self-supervised adversarial noise generator trained on large-scale data.

For a detailed explanation of the **AnyAttack** framework and methodology, please visit our **[Project Page](https://jiamingzhang94.github.io/anyattack/)**.

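To illustrate the idea, here is a minimal PyTorch sketch of the inference-time flow, assuming a pretrained noise generator: the generator maps a target image to a noise map, which is clipped to an L∞ budget and added to a benign image. The `NoiseGenerator` architecture, the ε value, and the random tensors are placeholders, not the actual AnyAttack implementation; see the GitHub repo for the real model and weights.

```python
# Minimal sketch of the inference-time flow (placeholder architecture, not
# the actual AnyAttack generator -- see the GitHub repo for the real model).
import torch
import torch.nn as nn

class NoiseGenerator(nn.Module):
    """Stand-in for the pretrained adversarial noise generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),  # noise in [-1, 1]
        )

    def forward(self, target):
        return self.net(target)

generator = NoiseGenerator()  # in practice, load the pretrained weights
eps = 8 / 255                 # L-infinity budget (a common choice; assumption)

benign = torch.rand(1, 3, 224, 224)  # stand-in for a benign image (e.g., a dog)
target = torch.rand(1, 3, 224, 224)  # image carrying the content to impersonate

with torch.no_grad():
    delta = generator(target).clamp(-eps, eps)  # bound the perturbation
    adv = (benign + delta).clamp(0.0, 1.0)      # keep a valid image
```

Note that the target here is itself an image rather than a class label, which is what removes the need for a predefined label set; the sketch folds any target-encoding step into the generator for brevity.
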
## 🔗 Links & Resources
- **Project Page:** [AnyAttack Website](https://jiamingzhang94.github.io/anyattack/)
- **Paper:** [arXiv](https://arxiv.org/abs/2410.05346/)
- **Code:** [GitHub](https://github.com/jiamingzhang94/AnyAttack/)

## 📜 Citation
If you use **AnyAttack** in your research, please cite our work:
```bibtex
@inproceedings{zhang2025anyattack,
  title={AnyAttack: Towards Large-scale Self-supervised Adversarial Attacks on Vision-language Models},
  author={Zhang, Jiaming and Ye, Junhong and Ma, Xingjun and Li, Yige and Yang, Yunfan and Chen, Yunhao and Sang, Jitao and Yeung, Dit-Yan},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2025}
}
```

## ⚠️ Disclaimer
This model is intended **for research purposes only**. The misuse of adversarial attacks can have ethical and legal implications. Please use responsibly.

---

### ⭐ If you find this model useful, please give it a star on Hugging Face! ⭐