hbXNov committed
Commit a41b6a9 · Parent(s): d7fa5c2

Update README.md

Files changed (1): README.md (+3, -2)
README.md CHANGED
@@ -9,8 +9,9 @@ Colab Notebook for Data Generation: https://colab.research.google.com/drive/1I2I
 
 Finetuning Recipe:
 1. We finetune the Stable Diffusion V1.5 model for 1 epoch on the complete ImageNet-1K training dataset, which contains ~1.3M images. The model was finetuned on a single 24GB A5000 GPU. It took us ~1 day to complete the finetuning.
-2. The finetuning code was adapted directly from the Huggingface Diffusers library - https://github.com/huggingface/diffusers/tree/main/examples/text_to_image. Our adapted code is present at XXXX
-3. During finetuning, we (a) do not enable --use_ema, (b) do not use gradient checkpointing, (c) use a lower learning rate of 1e-6, (d) use a 'cosine' learning rate schedule with 0 warmup steps, and (e) enable --use_8bit_adam from bitsandbytes.
+2. The finetuning code was adapted directly from the Huggingface Diffusers library - https://github.com/huggingface/diffusers/tree/main/examples/text_to_image.
+3. Link to our GitHub code: https://github.com/Hritikbansal/generative-robustness/tree/main/sd_finetune
+4. The complete set of finetuning arguments is available here - https://docs.google.com/document/d/17ggIdEuhAS0rhX7gIFp2q6H0JjkpERYFkCLTO_MtdgY/edit?usp=sharing
 
 
 Post-finetuning, we repeatedly sample from the generative model to generate 1.3M training and 50K validation images.
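
For concreteness, the settings enumerated in the removed recipe line map onto the command-line flags of the standard Diffusers `train_text_to_image.py` example. The following is a minimal sketch of such a launch command, not the authors' exact invocation; the dataset identifier, resolution, batch size, and output directory are illustrative assumptions.

```bash
# Hedged sketch of a launch command matching the described settings.
# dataset_name, resolution, train_batch_size, and output_dir are assumptions.
accelerate launch train_text_to_image.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --dataset_name="imagenet-1k" \
  --resolution=512 \
  --train_batch_size=8 \
  --num_train_epochs=1 \
  --learning_rate=1e-6 \
  --lr_scheduler="cosine" \
  --lr_warmup_steps=0 \
  --use_8bit_adam \
  --output_dir="sd-imagenet-finetuned"
# --use_ema and --gradient_checkpointing are deliberately omitted,
# per points (a) and (b) of the removed recipe line.
```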
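
The sampling step itself is not part of this diff. As a rough illustration only, repeated class-conditional sampling with the standard diffusers pipeline could look like the sketch below; the checkpoint path, class prompt, and batch size are assumptions, not taken from the repo.

```bash
# Hedged sketch of the repeated-sampling step, driven by an inline Python
# program; the checkpoint path and prompt template are assumed placeholders.
python - <<'EOF'
import torch
from diffusers import StableDiffusionPipeline

# Load the finetuned checkpoint (path is an assumed placeholder).
pipe = StableDiffusionPipeline.from_pretrained(
    "sd-imagenet-finetuned", torch_dtype=torch.float16
).to("cuda")

# Generate a small batch for one ImageNet class; in practice this would
# loop over all 1,000 class prompts until ~1.3M train / 50K val images
# are collected.
images = pipe("a photo of a goldfish", num_images_per_prompt=4).images
for i, img in enumerate(images):
    img.save(f"goldfish_{i}.png")
EOF
```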