viccpoes committed on
Commit 7d29b01
1 Parent(s): 21dddc3

Update README.md

Files changed (1): README.md (+8 -7)
README.md CHANGED
@@ -8,11 +8,15 @@ tags:
 ---
 
 # Aesthetic ControlNet
+This model can produce highly aesthetic results from an input image and a text prompt.
 
-ControlNet is a method that can be used to condition diffusion models on arbitrary input features, such as image edges, segmentation maps, or human poses.
-For more information about ControlNet, please have a look at this [thread](https://twitter.com/krea_ai/status/1626672218477559809) or at the original [work](https://arxiv.org/pdf/2302.05543.pdf) by Lvmin Zhang and Maneesh Agrawala.
+ControlNet is a method that can be used to condition diffusion models on arbitrary input features, such as image edges, segmentation maps, or human poses.
 
-Aesthetic ControlNet is a version of this technique that uses image features extracted using a [Canny edge detector](https://docs.opencv.org/4.x/da/d22/tutorial_py_canny.html) to guide a text-to-image diffusion model trained with aesthetic data.
+Aesthetic ControlNet is a version of this technique that uses image features extracted with a [Canny edge detector](https://docs.opencv.org/4.x/da/d22/tutorial_py_canny.html) to guide a text-to-image diffusion model trained on a large aesthetic dataset.
+
+The base diffusion model is a fine-tuned version of Stable Diffusion 2.1 trained at a resolution of 640x640, and the control network comes from [thibaud/controlnet-sd21](https://huggingface.co/thibaud/controlnet-sd21) by [@thibaudz](https://twitter.com/thibaudz).
+
+For more information about ControlNet, have a look at this [thread](https://twitter.com/krea_ai/status/1626672218477559809) or at the original [work](https://arxiv.org/pdf/2302.05543.pdf) by Lvmin Zhang and Maneesh Agrawala.
 
 ![Example](./examples.jpg)
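The Canny conditioning step described in the new copy above is straightforward to reproduce. Below is a minimal preprocessing sketch, assuming OpenCV, NumPy, and Pillow are available; the input path and the 100/200 thresholds are illustrative placeholders, not values from this repository.

```python
import cv2
import numpy as np
from PIL import Image

# Load the conditioning image; "input.png" is a placeholder path.
image = np.array(Image.open("input.png").convert("RGB"))

# Extract Canny edges; the (100, 200) thresholds are common defaults,
# not values taken from this repository.
edges = cv2.Canny(image, 100, 200)

# ControlNet expects a 3-channel conditioning image, so replicate the
# single-channel edge map across RGB.
canny_image = Image.fromarray(np.stack([edges] * 3, axis=-1))
canny_image.save("canny.png")
```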
 
@@ -60,7 +64,4 @@ result.save("result.png")
 ```
 
 ## Misuse and Malicious Use
 The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
-
-## Shout-outs
-Thanks to [@thibaudz](https://twitter.com/thibaudz) for creating a version of ControlNet compatible with Stable Diffusion 2.1.
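The README's usage snippet is truncated in this diff; only `result.save("result.png")` and the closing fence are visible. For orientation, here is a minimal sketch of how such a pipeline is typically driven with diffusers. The checkpoint id, prompt, and step count are assumptions for illustration rather than values confirmed by this commit; the 640x640 resolution follows the note above about the fine-tuned base model.

```python
import torch
from diffusers import StableDiffusionControlNetPipeline
from PIL import Image

# Load the full ControlNet pipeline; the checkpoint id is an assumption
# for illustration; substitute the repository this README belongs to.
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "krea/aesthetic-controlnet", torch_dtype=torch.float16
).to("cuda")

# Conditioning image produced by the Canny preprocessing sketch above.
canny_image = Image.open("canny.png")

# Generate at 640x640, the resolution the base model was fine-tuned at;
# the prompt and step count are placeholders.
result = pipe(
    "a fantasy landscape, concept art, high detail",
    image=canny_image,
    height=640,
    width=640,
    num_inference_steps=20,
).images[0]
result.save("result.png")
```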
 