FantasticGNU committed
Commit 384f77f
Parent: 0ab93b5

Update README.md

Files changed (1)
  1. README.md +35 -16
README.md CHANGED
@@ -27,16 +27,18 @@ Zhaopeng Gu, Bingke Zhu, Guibo Zhu, Yingying Chen, Ming Tang, Jinqiao Wang
 
 * <a href='#introduction'>1. Introduction</a>
 * <a href='#environment'>2. Running AnomalyGPT Demo</a>
- * <a href='#install_environment'>2.1. Environment Installation</a>
- * <a href='#download_imagebind_model'>2.2. Prepare ImageBind Checkpoint</a>
- * <a href='#download_vicuna_model'>2.3. Prepare Vicuna Checkpoint</a>
- * <a href='#download_anomalygpt'>2.4. Prepare Delta Weights of AnomalyGPT</a>
- * <a href='#running_demo'>2.5. Deploying Demo</a>
+ * <a href='#install_environment'>2.1 Environment Installation</a>
+ * <a href='#download_imagebind_model'>2.2 Prepare ImageBind Checkpoint</a>
+ * <a href='#download_vicuna_model'>2.3 Prepare Vicuna Checkpoint</a>
+ * <a href='#download_anomalygpt'>2.4 Prepare Delta Weights of AnomalyGPT</a>
+ * <a href='#running_demo'>2.5 Deploying Demo</a>
 * <a href='#train_anomalygpt'>3. Train Your Own AnomalyGPT</a>
- * <a href='#data_preparation'>3.1. Data Preparation</a>
- * <a href='#training_configurations'>3.2. Training Configurations</a>
- * <a href='#model_training'>3.3. Training AnomalyGPT</a>
- <!-- * <a href='#results'>4. Results</a> -->
+ * <a href='#data_preparation'>3.1 Data Preparation</a>
+ * <a href='#training_configurations'>3.2 Training Configurations</a>
+ * <a href='#model_training'>3.3 Training AnomalyGPT</a>
+
+ * <a href='#examples'>4. Examples</a>
+ <!-- * <a href='#results'>5. Results</a> -->
 * <a href='#license'>License</a>
 * <a href='#citation'>Citation</a>
 * <a href='#acknowledgments'>Acknowledgments</a>
@@ -66,7 +68,7 @@ We leverage a pre-trained image encoder and a Large Language Model (LLM) to alig
 
 <span id='install_environment'/>
 
- #### 2.1. Environment Installation
+ #### 2.1 Environment Installation
 
  Clone the repository locally:
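
For readers following the diff, a minimal sketch of this installation step; the repository URL below is an assumption for illustration, and `pip install -r requirements.txt` is taken from the context shown in the next hunk.

```bash
# Clone the repository and install the Python dependencies.
# NOTE: the clone URL is assumed here for illustration only.
git clone https://github.com/CASIA-IVA-Lab/AnomalyGPT.git
cd AnomalyGPT
pip install -r requirements.txt
```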
 
@@ -82,19 +84,19 @@ pip install -r requirements.txt
 
 <span id='download_imagebind_model'/>
 
- #### 2.2. Prepare ImageBind Checkpoint:
+ #### 2.2 Prepare ImageBind Checkpoint:
 
 You can download the pre-trained ImageBind model using [this link](https://dl.fbaipublicfiles.com/imagebind/imagebind_huge.pth). After downloading, put the downloaded file (imagebind_huge.pth) in the [[./pretrained_ckpt/imagebind_ckpt/]](./pretrained_ckpt/imagebind_ckpt/) directory.
 
 <span id='download_vicuna_model'/>
 
- #### 2.3. Prepare Vicuna Checkpoint:
+ #### 2.3 Prepare Vicuna Checkpoint:
 
 To prepare the pre-trained Vicuna model, please follow the instructions provided [[here]](./pretrained_ckpt#1-prepare-vicuna-checkpoint).
 
 <span id='download_anomalygpt'/>
 
- #### 2.4. Prepare Delta Weights of AnomalyGPT:
+ #### 2.4 Prepare Delta Weights of AnomalyGPT:
 
 We use the pre-trained parameters from [PandaGPT](https://github.com/yxuansu/PandaGPT) to initialize our model. You can get the weights of PandaGPT trained with different strategies in the table below. In our experiments and online demo, we use Vicuna-7B and `openllmplayground/pandagpt_7b_max_len_1024` due to limited computational resources. Better results are expected when switching to Vicuna-13B.
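
As a quick reference for step 2.2 above, a minimal sketch that fetches the ImageBind checkpoint into the directory the README expects; the URL and target path come straight from the hunk, and using `wget` is merely one way to download it.

```bash
# Download the pre-trained ImageBind checkpoint into the expected directory.
mkdir -p ./pretrained_ckpt/imagebind_ckpt/
wget https://dl.fbaipublicfiles.com/imagebind/imagebind_huge.pth \
     -P ./pretrained_ckpt/imagebind_ckpt/
```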
 
@@ -121,7 +123,7 @@ In our [online demo](), we use the supervised setting as our default model to at
 
 <span id='running_demo'/>
 
- #### 2.5. Deploying Demo
+ #### 2.5 Deploying Demo
 
 Upon completion of the previous steps, you can run the demo locally as
  ```bash
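
The launch command itself is elided from this hunk, but the next hunk's header shows `python web_demo.py` as surrounding context; a minimal sketch, assuming the script lives in the `./code/` directory:

```bash
# Start the local web demo once all checkpoints are in place.
cd ./code/          # assumed location of web_demo.py
python web_demo.py
```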
@@ -139,7 +141,7 @@ python web_demo.py
 
 <span id='data_preparation'/>
 
- #### 3.1. Data Preparation:
+ #### 3.1 Data Preparation:
 
 You can download the MVTec-AD dataset from [[this link]](https://www.mvtec.com/company/research/datasets/mvtec-ad/downloads) and VisA from [[this link]](https://github.com/amazon-science/spot-diff). You can also download the pre-training data of PandaGPT from [[here]](https://huggingface.co/datasets/openllmplayground/pandagpt_visual_instruction_dataset/tree/main). After downloading, put the data in the [[./data]](./data/) directory.
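
A minimal sketch of the layout implied by step 3.1; only the top-level `./data` directory is stated in the hunk, so the source paths and sub-folder names below are placeholders.

```bash
# Gather the downloaded datasets under ./data (paths and folder names are placeholders).
mkdir -p ./data
mv /path/to/mvtec_anomaly_detection ./data/                # MVTec-AD
mv /path/to/VisA ./data/                                   # VisA
mv /path/to/pandagpt_visual_instruction_dataset ./data/    # PandaGPT pre-training data
```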
 
@@ -189,7 +191,7 @@ The table below show the training hyperparameters used in our experiments. The h
 
 <span id='model_training'/>
 
- #### 3.3. Training AnomalyGPT
+ #### 3.3 Training AnomalyGPT
 
 To train AnomalyGPT on the MVTec-AD dataset, please run the following commands:
 ```yaml
@@ -208,6 +210,23 @@ The key arguments of the training script are as follows:
 
 Note that the number of epochs can be set via the `epochs` argument in the [./code/config/openllama_peft.yaml](./code/config/openllama_peft.yaml) file, and the learning rate can be set in [./code/dsconfig/openllama_peft_stage_1.json](./code/dsconfig/openllama_peft_stage_1.json).
 
+ ****
+
+ <span id='examples'/>
+
+ ### 4. Examples
+
+ ![](./images/demo_1.png)
+ ****
+ ![](./images/demo_5.png)
+ ****
+ ![](./images/demo_2.png)
+ ****
+ ![](./images/demo_4.png)
+ ****
+ ![](./images/demo_3.png)
+
+
 <!-- ****
 
 <span id='results'/>
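
To complement the note in the hunk above, a small sketch for locating the two settings it mentions; the `epochs` key name is stated in the note, while the learning-rate key name in the JSON config is an assumption.

```bash
# Locate the hyperparameters referenced in the note above.
grep -n "epochs" ./code/config/openllama_peft.yaml
grep -n "lr" ./code/dsconfig/openllama_peft_stage_1.json   # key name assumed
```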
 