Commit 7dfa0df: Update README.md
mhdang committed (1 parent: 92fc2cf)

Files changed (1): README.md (+1, -1)
README.md CHANGED
@@ -16,7 +16,7 @@ Direct Preference Optimization (DPO) for text-to-image diffusion models is a met
 This model is fine-tuned from [stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) on offline human preference data [pickapic_v2](https://huggingface.co/datasets/yuvalkirstain/pickapic_v2).
 
 ## Code
-*Code will come soon!!!*
+The code is available [here](https://github.com/huggingface/diffusers/tree/main/examples/research_projects/diffusion_dpo).
 
 ## SD1.5
 We also have a model finedtuned from [stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) available at [dpo-sd1.5-text2image-v1](https://huggingface.co/mhdang/dpo-sd1.5-text2image-v1).