Merge branch 'main' of https://huggingface.co/tianweiy/DMD2 into main
README.md CHANGED
@@ -22,9 +22,9 @@ Tianwei Yin [tianweiy@mit.edu](mailto:tianweiy@mit.edu)
 
 ## Huggingface Demo
 
-Our 4-step (much higher quality, 2X slower) Text-to-Image demo is hosted at [DMD2-4step](https://
+Our 4-step (much higher quality, 2X slower) Text-to-Image demo is hosted at [DMD2-4step](https://6cf215173601f32482.gradio.live)
 
-Our 1-step Text-to-Image demo is hosted at [DMD2-1step](https://
+Our 1-step Text-to-Image demo is hosted at [DMD2-1step](https://cc2622c0c132346c64.gradio.live)
 
 ## Usage
 
@@ -46,7 +46,9 @@ unet.load_state_dict(torch.load(hf_hub_download(repo_name, ckpt_name), map_locat
 pipe = DiffusionPipeline.from_pretrained(base_model_id, unet=unet, torch_dtype=torch.float16, variant="fp16").to("cuda")
 pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
 prompt="a photo of a cat"
-
+
+# LCMScheduler's default timesteps are different from the ones we used for training
+image=pipe(prompt=prompt, num_inference_steps=4, guidance_scale=0, timesteps=[999, 749, 499, 249]).images[0]
 ```
 
 #### 1-step generation
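For reference, here is the post-merge 4-step snippet assembled into a self-contained script. The diff shows only part of the README's code block: the imports and the definitions of `base_model_id`, `repo_name`, and `ckpt_name` sit outside these hunks (only the `unet.load_state_dict(...)` context line is visible), so the values below are assumptions chosen to match the SDXL setup this README describes; this is a sketch, not the verbatim file.

```python
# A minimal sketch of the merged 4-step snippet, made self-contained.
# base_model_id, repo_name, and ckpt_name are defined outside the hunks
# shown above, so these values are assumptions, not the verbatim README.
import torch
from diffusers import DiffusionPipeline, LCMScheduler, UNet2DConditionModel
from huggingface_hub import hf_hub_download

base_model_id = "stabilityai/stable-diffusion-xl-base-1.0"  # assumed SDXL base
repo_name = "tianweiy/DMD2"
ckpt_name = "dmd2_sdxl_4step_unet_fp16.bin"  # assumed checkpoint filename

# Build an uninitialized UNet from the base model's config, then load the
# distilled weights (matches the load_state_dict context line in the hunk).
unet_config = UNet2DConditionModel.load_config(base_model_id, subfolder="unet")
unet = UNet2DConditionModel.from_config(unet_config).to("cuda", torch.float16)
unet.load_state_dict(torch.load(hf_hub_download(repo_name, ckpt_name), map_location="cuda"))

pipe = DiffusionPipeline.from_pretrained(
    base_model_id, unet=unet, torch_dtype=torch.float16, variant="fp16"
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

prompt = "a photo of a cat"

# LCMScheduler's default timesteps differ from the ones used for training,
# so the training timesteps are passed explicitly.
image = pipe(
    prompt=prompt,
    num_inference_steps=4,
    guidance_scale=0,
    timesteps=[999, 749, 499, 249],
).images[0]
image.save("cat.png")
```

The explicit `timesteps=[999, 749, 499, 249]` list comes straight from the added line in the second hunk: as its comment notes, LCMScheduler's default schedule differs from the timesteps the distilled UNet was trained on, so they must be passed to the pipeline call.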