---
title: QuadrupedData
emoji: 🐢
colorFrom: red
colorTo: purple
sdk: gradio
sdk_version: 3.8.2
app_file: app.py
pinned: false
---

# AdaCLIP (Detecting Anomalies for Novel Categories)

> [**ECCV 24**] [**AdaCLIP: Adapting CLIP with Hybrid Learnable Prompts for Zero-Shot Anomaly Detection**]().
>
> by [Yunkang Cao](https://caoyunkang.github.io/), [Jiangning Zhang](https://zhangzjn.github.io/), [Luca Frittoli](https://scholar.google.com/citations?user=cdML_XUAAAAJ),
> [Yuqi Cheng](https://scholar.google.com/citations?user=02BC-WgAAAAJ&hl=en), [Weiming Shen](https://scholar.google.com/citations?user=FuSHsx4AAAAJ&hl=en), [Giacomo Boracchi](https://boracchi.faculty.polimi.it/)
>

## Introduction
Zero-shot anomaly detection (ZSAD) targets the identification of anomalies within images from arbitrary novel categories.
This study introduces AdaCLIP for the ZSAD task, leveraging a pre-trained vision-language model (VLM), CLIP.
AdaCLIP incorporates learnable prompts into CLIP and optimizes them through training on auxiliary annotated anomaly detection data.
Two types of learnable prompts are proposed: *static* and *dynamic*. Static prompts are shared across all images, serving to preliminarily adapt CLIP for ZSAD.
In contrast, dynamic prompts are generated for each test image, providing CLIP with dynamic adaptation capabilities.
The combination of static and dynamic prompts is referred to as hybrid prompts, and yields enhanced ZSAD performance.
Extensive experiments conducted across 14 real-world anomaly detection datasets from industrial and medical domains indicate that AdaCLIP outperforms other ZSAD methods and can generalize better to different categories and even domains.
Finally, our analysis highlights the importance of diverse auxiliary data and optimized prompts for enhanced generalization capacity.

## Overview of AdaCLIP


## 🛠️ Getting Started

### Installation
To set up the AdaCLIP environment, follow one of the methods below:

- Clone this repo:
```shell
git clone https://github.com/caoyunkang/AdaCLIP.git && cd AdaCLIP
```
- You can use our provided installation script for an automated setup:
```shell
sh install.sh
```
- If you prefer to construct the experimental environment manually, follow these steps:
```shell
conda create -n AdaCLIP python=3.9.5 -y
conda activate AdaCLIP
pip install torch==1.10.1+cu111 torchvision==0.11.2+cu111 torchaudio==0.10.1 -f https://download.pytorch.org/whl/cu111/torch_stable.html
pip install tqdm tensorboard setuptools==58.0.4 opencv-python scikit-image scikit-learn matplotlib seaborn ftfy regex numpy==1.26.4
pip install gradio # Optional, for the demo app
```
- Remember to update the dataset root in `config.py` according to your preference:
```python
DATA_ROOT = '../datasets' # Original setting
```
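
Whichever setup route you take, a quick sanity check that the CUDA build of PyTorch is active can save a failed run later (a generic PyTorch check, not specific to this repo):

```shell
# Should print 1.10.1+cu111 and True on a correctly configured GPU machine
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```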

### Dataset Preparation
Please download our processed visual anomaly detection datasets to your `DATA_ROOT` as needed.

#### Industrial Visual Anomaly Detection Datasets
Note: some links are still being processed...

| Dataset | Google Drive | Baidu Drive | Task |
|------------|------------------|------------------|------------------|
| MVTec AD | [Google Drive](link) | [Baidu Drive](link) | Anomaly Detection & Localization |
| VisA | [Google Drive](link) | [Baidu Drive](link) | Anomaly Detection & Localization |
| MPDD | [Google Drive](link) | [Baidu Drive](link) | Anomaly Detection & Localization |
| BTAD | [Google Drive](link) | [Baidu Drive](link) | Anomaly Detection & Localization |
| KSDD | [Google Drive](link) | [Baidu Drive](link) | Anomaly Detection & Localization |
| DAGM | [Google Drive](link) | [Baidu Drive](link) | Anomaly Detection & Localization |
| DTD-Synthetic | [Google Drive](link) | [Baidu Drive](link) | Anomaly Detection & Localization |

#### Medical Visual Anomaly Detection Datasets
| Dataset | Google Drive | Baidu Drive | Task |
|------------|------------------|------------------|------------------|
| HeadCT | [Google Drive](link) | [Baidu Drive](link) | Anomaly Detection |
| BrainMRI | [Google Drive](link) | [Baidu Drive](link) | Anomaly Detection |
| Br35H | [Google Drive](link) | [Baidu Drive](link) | Anomaly Detection |
| ISIC | [Google Drive](link) | [Baidu Drive](link) | Anomaly Localization |
| ColonDB | [Google Drive](link) | [Baidu Drive](link) | Anomaly Localization |
| ClinicDB | [Google Drive](link) | [Baidu Drive](link) | Anomaly Localization |
| TN3K | [Google Drive](link) | [Baidu Drive](link) | Anomaly Localization |

#### Custom Datasets
To use your custom dataset, follow these steps:

1. Refer to the instructions in `./data_preprocess` to generate the JSON file for your dataset (see the sketch after this list).
2. Use `./dataset/base_dataset.py` to construct your own dataset.
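
As a rough sketch of step 1 (the script name below is hypothetical; `./data_preprocess` contains the real per-dataset scripts to adapt):

```shell
# Hypothetical invocation: write a meta JSON for a dataset placed under DATA_ROOT.
# Copy and adapt one of the existing scripts in ./data_preprocess for your data;
# the resulting JSON is what ./dataset/base_dataset.py consumes.
python data_preprocess/my_dataset.py
```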

### Weight Preparation

We offer various pre-trained weights trained on different auxiliary datasets.
Please download the pre-trained weights to `./weights`.

| Pre-trained Datasets | Google Drive | Baidu Drive |
|------------|------------------|------------------|
| MVTec AD & ColonDB | [Google Drive](https://drive.google.com/file/d/1xVXANHGuJBRx59rqPRir7iqbkYzq45W0/view?usp=drive_link) | [Baidu Drive](link) |
| VisA & ClinicDB | [Google Drive](https://drive.google.com/file/d/1QGmPB0ByPZQ7FucvGODMSz7r5Ke5wx9W/view?usp=drive_link) | [Baidu Drive](link) |
| All Datasets Mentioned Above | [Google Drive](https://drive.google.com/file/d/1Cgkfx3GAaSYnXPLolx-P7pFqYV0IVzZF/view?usp=drive_link) | [Baidu Drive](link) |

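If you prefer the command line, one option is `gdown` (an extra dependency, `pip install gdown`); the file ID below is taken from the first Google Drive link above, and the output filename is an arbitrary choice:

```shell
mkdir -p weights
pip install gdown  # not included in the environment above
gdown 1xVXANHGuJBRx59rqPRir7iqbkYzq45W0 -O weights/pretrained_mvtec_colondb.pth  # filename is hypothetical
```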

### Train

By default, we use MVTec AD & ColonDB for training and VisA for validation:
```shell
CUDA_VISIBLE_DEVICES=0 python train.py --save_fig True --training_data mvtec colondb --testing_data visa
```

Alternatively, for evaluation on MVTec AD & ColonDB, we use VisA & ClinicDB for training and MVTec AD for validation:
```shell
CUDA_VISIBLE_DEVICES=0 python train.py --save_fig True --training_data visa clinicdb --testing_data mvtec
```

Since we use half-precision (FP16) for training, the training process can occasionally be unstable.
It is recommended to run the training process multiple times and choose the model that performs best on the validation set as the final model.
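
A minimal sketch of that practice (assuming successive runs do not overwrite each other's checkpoints; check `train.py`'s arguments for how outputs are named):

```shell
# Train three times with the default data split, then compare validation scores.
for i in 1 2 3; do
    CUDA_VISIBLE_DEVICES=0 python train.py --save_fig True \
        --training_data mvtec colondb --testing_data visa
done
```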

To construct a robust ZSAD model for demonstration, we also train our AdaCLIP on all AD datasets mentioned above:
```shell
CUDA_VISIBLE_DEVICES=0 python train.py --save_fig True \
    --training_data \
    br35h brain_mri btad clinicdb colondb \
    dagm dtd headct isic mpdd mvtec sdd tn3k visa \
    --testing_data mvtec
```

### Test

Manually select the best models from the validation set and place them in the `weights/` directory. Then, run the following testing script:
```shell
sh test.sh
```

If you want to test on a single image, you can refer to `test_single_image.sh`:
```shell
CUDA_VISIBLE_DEVICES=0 python test.py --testing_model image --ckt_path weights/pretrained_all.pth --save_fig True \
    --image_path asset/img.png --class_name candle --save_name test.png
```

## Main Results

Due to differences in the package versions used, performance with the provided pre-trained weights may vary slightly from the reported results; some categories may score higher and others lower.




### :page_facing_up: Demo App

To run the demo application, use the following command:

```bash
python app.py
```
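
If the default port is busy or you want to expose the app on your network, Gradio's standard environment variables apply (generic Gradio behavior, not specific to this repo):

```bash
GRADIO_SERVER_NAME=0.0.0.0 GRADIO_SERVER_PORT=7860 python app.py
```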



## 💘 Acknowledgements
Our work is largely inspired by the following projects. Thanks for their admirable contributions.

- [VAND-APRIL-GAN](https://github.com/ByChelsea/VAND-APRIL-GAN)
- [AnomalyCLIP](https://github.com/zqhang/AnomalyCLIP)
- [SAA](https://github.com/caoyunkang/Segment-Any-Anomaly)


## Stargazers over time
[](https://starchart.cc/caoyunkang/AdaCLIP)


## Citation

If you find this project helpful for your research, please consider citing the following BibTeX entry.

```BibTex
@inproceedings{cao2024adaclip,
  title={AdaCLIP: Adapting CLIP with Hybrid Learnable Prompts for Zero-Shot Anomaly Detection},
  author={Cao, Yunkang and Zhang, Jiangning and Frittoli, Luca and Cheng, Yuqi and Shen, Weiming and Boracchi, Giacomo},
  booktitle={European Conference on Computer Vision},
  year={2024}
}
```