Adapter committed on
Commit 37dfce1 · 1 Parent(s): f4abe09

Update README.md

Files changed (1)
  1. README.md +32 -23
README.md CHANGED
@@ -12,7 +12,7 @@ tags:
 
 T2I Adapter is a network providing additional conditioning to stable diffusion. Each t2i checkpoint takes a different type of conditioning as input and is used with a specific base stable diffusion checkpoint.
 
-This checkpoint provides conditioning on canny for the StableDiffusionXL checkpoint.
+This checkpoint provides conditioning on depth for the StableDiffusionXL checkpoint.
 
 ## Model Details
 - **Developed by:** T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models
@@ -31,14 +31,17 @@ This checkpoint provides conditioning on canny for the StableDiffusionXL checkpo
   primaryClass={cs.CV}
 }
 
+
 ### Checkpoints
 
 | Model Name | Control Image Overview| Control Image Example | Generated Image Example |
 |---|---|---|---|
-|[Adapter/t2iadapter_canny_sdxlv1](https://huggingface.co/Adapter/t2iadapter_canny_sdxlv1)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href=""><img width="64" style="margin:0;padding:0;" src=""/></a>|<a href=""><img width="64" src=""/></a>|
-|[Adapter/t2iadapter_sketch_sdxlv1](https://huggingface.co/Adapter/t2iadapter_sketch_sdxlv1)<br/> *Trained with [PidiNet](https://github.com/zhuoinoulu/pidinet) edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href=""><img width="64" style="margin:0;padding:0;" src=""/></a>|<a href=""><img width="64" src=""/></a>|
-|[Adapter/t2iadapter_depth_sdxlv1](https://huggingface.co/Adapter/t2iadapter_depth_sdxlv1)<br/> *Trained with Midas depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href=""><img width="64" src=""/></a>|<a href=""><img width="64" src=""/></a>|
-|[Adapter/t2iadapter_openpose_sdxlv1](https://huggingface.co/Adapter/t2iadapter_openpose_sdxlv1)<br/> *Trained with OpenPose bone image* | A [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href=""><img width="64" src=""/></a>|<a href=""><img width="64" src=""/></a>|
+|[TencentARC/t2i-adapter-canny-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-canny-sdxl-1.0)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_canny.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_canny.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_canny.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_canny.png"/></a>|
+|[TencentARC/t2i-adapter-sketch-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-sketch-sdxl-1.0)<br/> *Trained with [PidiNet](https://github.com/zhuoinoulu/pidinet) edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_sketch.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_sketch.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_sketch.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_sketch.png"/></a>|
+|[TencentARC/t2i-adapter-lineart-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-lineart-sdxl-1.0)<br/> *Trained with lineart edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_lin.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_lin.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_lin.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_lin.png"/></a>|
+|[TencentARC/t2i-adapter-depth-midas-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-depth-midas-sdxl-1.0)<br/> *Trained with Midas depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_mid.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_mid.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_mid.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_mid.png"/></a>|
+|[TencentARC/t2i-adapter-depth-zoe-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-depth-zoe-sdxl-1.0)<br/> *Trained with Zoe depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_zeo.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_zeo.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_zeo.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_zeo.png"/></a>|
+|[Adapter/t2iadapter_openpose_sdxlv1](https://huggingface.co/Adapter/t2iadapter_openpose_sdxlv1)<br/> *Trained with OpenPose bone image* | An [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/openpose.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/openpose.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/res_pose.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/res_pose.png"/></a>|
 
 
 ## Example
@@ -54,42 +57,47 @@ pip install transformers accelerate safetensors
 1. Images are first downloaded into the appropriate *control image* format.
 2. The *control image* and *prompt* are passed to the [`StableDiffusionXLAdapterPipeline`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/t2i_adapter/pipeline_stable_diffusion_xl_adapter.py#L125).
 
-Let's have a look at a simple example using the [Canny Adapter](https://huggingface.co/Adapter/t2iadapter_canny_sdxlv1).
+Let's have a look at a simple example using the [Depth MiDaS Adapter](https://huggingface.co/TencentARC/t2i-adapter-depth-midas-sdxl-1.0).
 
+- Dependencies
 ```py
-from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter, EulerAncestralDiscreteScheduler
+from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter, EulerAncestralDiscreteScheduler, AutoencoderKL
 from diffusers.utils import load_image, make_image_grid
-from controlnet_aux.zoe import MidasDetector
+from controlnet_aux.midas import MidasDetector
+import torch
 
 # load adapter
 adapter = T2IAdapter.from_pretrained(
-    "Adapter/t2i-adapter-depth-midas-sdxl-1.0", torch_dtype=torch.float16, varient="fp16"
+    "TencentARC/t2i-adapter-depth-midas-sdxl-1.0", torch_dtype=torch.float16, variant="fp16"
 ).to("cuda")
 
 # load euler_a scheduler
 model_id = 'stabilityai/stable-diffusion-xl-base-1.0'
 euler_a = EulerAncestralDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
-vae= AutoencoderKL.from_pretrained(
-    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
-)
+vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
 pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
-    model_id, vae=vae, adapter=adapter, scheduler=euler_a, torch_dtype=torch.float16, variant="fp16",
+    model_id, vae=vae, adapter=adapter, scheduler=euler_a, torch_dtype=torch.float16, variant="fp16",
 ).to("cuda")
 pipe.enable_xformers_memory_efficient_attention()
 
-
 midas_depth = MidasDetector.from_pretrained(
     "valhalla/t2iadapter-aux-models", filename="dpt_large_384.pt", model_type="dpt_large"
).to("cuda")
+```
 
-
-url = "https://raw.githubusercontent.com/lllyasviel/ControlNet/main/test_imgs/cyber.png"
+- Condition Image
+```py
+url = "https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_mid.jpg"
 image = load_image(url)
 image = midas_depth(
     image, detect_resolution=512, image_resolution=1024
-).resize((896, 1152))
+)
+```
+<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_mid.png"><img width="480" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_mid.png"/></a>
 
-prompt = "a robot, mount fuji in the background, 4k photo, highly detailed"
+- Generation
+```py
+prompt = "A photo of a room, 4k photo, highly detailed"
 negative_prompt = "anime, cartoon, graphic, text, painting, crayon, graphite, abstract, glitch, deformed, mutated, ugly, disfigured"
 
 gen_images = pipe(
@@ -97,8 +105,9 @@ gen_images = pipe(
     negative_prompt=negative_prompt,
     image=image,
     num_inference_steps=30,
-    adapter_conditioning_scale=1,
-    cond_tau=1
-).images
-gen_images[0]
-```
+    adapter_conditioning_scale=1,
+    guidance_scale=7.5,
+).images[0]
+gen_images.save('out_mid.png')
+```
+<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_mid.png"><img width="480" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_mid.png"/></a>
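
The updated example imports `make_image_grid` from `diffusers.utils` but never calls it. As a minimal sketch of how it could be used to compare the MiDaS depth map with the generated image (the grid layout, the `(512, 512)` size, and the output file name below are illustrative assumptions, not part of the card):

```py
# Illustrative sketch: place the depth map and the generated image in a 1x2 grid.
# `image` (the MiDaS depth map) and `gen_images` (the generated PIL image) come
# from the example above; resizing keeps both cells the same size.
from diffusers.utils import make_image_grid  # already imported in the example

comparison = make_image_grid(
    [image.resize((512, 512)), gen_images.resize((512, 512))], rows=1, cols=2
)
comparison.save("depth_midas_comparison.png")  # hypothetical output file name
```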
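Since every checkpoint in the table pairs one conditioning type with the same SDXL base model, switching adapters mostly means switching the adapter repo and the detector. A rough sketch for the canny checkpoint listed above, reusing `model_id`, `vae`, `euler_a`, `url`, and the prompts from the example; the `CannyDetector` class comes from `controlnet_aux`, and the variable names here are my own, not from the card:

```py
# Sketch under the assumptions above: same SDXL base, VAE and scheduler,
# but with the canny adapter from the checkpoint table and a canny edge detector.
from controlnet_aux import CannyDetector

canny_adapter = T2IAdapter.from_pretrained(
    "TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16
).to("cuda")
pipe_canny = StableDiffusionXLAdapterPipeline.from_pretrained(
    model_id, vae=vae, adapter=canny_adapter, scheduler=euler_a,
    torch_dtype=torch.float16, variant="fp16",
).to("cuda")

canny_image = CannyDetector()(load_image(url))  # white edges on a black background
canny_out = pipe_canny(
    prompt=prompt, negative_prompt=negative_prompt,
    image=canny_image, num_inference_steps=30,
).images[0]
```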
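`enable_xformers_memory_efficient_attention()` assumes xformers is installed. If it is not, one possible fallback (an assumption on my part, not something the card states) is to skip that call and the explicit `.to("cuda")`, and let `accelerate` offload submodules on demand:

```py
# Sketch, assuming xformers is unavailable: build the pipeline as above, keep it
# on the CPU, and let accelerate move submodules to the GPU only while they run
# (lower peak VRAM, somewhat slower).
pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
    model_id, vae=vae, adapter=adapter, scheduler=euler_a,
    torch_dtype=torch.float16, variant="fp16",
)
pipe.enable_model_cpu_offload()
```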