update README.md

README.md
I trained [ControlNet](https://github.com/lllyasviel/ControlNet), proposed by lllyasviel, on a face dataset. Using facial landmarks as the conditioning input allows finer control over generated faces.

Currently, I'm using Stable Diffusion 1.5 as the base model and dlib as the face landmark detector (those with the capability can replace it with a better one). The checkpoint can be found in the `models` folder.
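For a rough idea of the landmark-detection step, here is a minimal sketch of 68-point extraction with dlib. The `shape_predictor_68_face_landmarks.dat` file is dlib's standard predictor model (downloaded separately), and the helper function is illustrative rather than code from this repo:

```python
# Sketch: extract 68-point facial landmarks with dlib.
# Assumes shape_predictor_68_face_landmarks.dat (dlib's standard predictor,
# downloaded separately) sits next to this script.
import dlib
import numpy as np
from PIL import Image

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_landmarks(image_path: str) -> np.ndarray:
    """Return a (68, 2) array of (x, y) landmark coordinates for the first detected face."""
    img = np.array(Image.open(image_path).convert("RGB"))
    faces = detector(img, 1)          # upsample once to help with small faces
    if not faces:
        raise ValueError("no face detected")
    shape = predictor(img, faces[0])  # 68-point shape for the first face
    return np.array([(p.x, p.y) for p in shape.parts()])
```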
**Create conda environment:**

To launch the Gradio demo, run `python gradio_landmark2image.py`.
To create a new face, input an image and extract the facial landmarks from it. These landmarks will be used as a reference to redraw the face while ensuring that the original features are retained.

![Generate face with the identical poses and expression](https://raw.githubusercontent.com/Georgefwt/Face-Landmark-ControlNet/master/assets/Generatefacewiththeidenticalposesandexpression.png)
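The condition fed to ControlNet is essentially those landmarks drawn onto a blank canvas. A minimal sketch of that rasterization follows; the exact drawing style the checkpoint was trained with (point radius, colors, whether contours are connected) comes from the repo's own preprocessing, so the values below are assumptions:

```python
# Sketch: rasterize detected landmarks into a condition image.
# The rendering the checkpoint actually expects is defined by the repo's
# preprocessing; point size and colors here are illustrative only.
import cv2
import numpy as np

def landmarks_to_condition(landmarks: np.ndarray, height: int, width: int) -> np.ndarray:
    """Draw (x, y) landmark points as white dots on a black canvas."""
    canvas = np.zeros((height, width, 3), dtype=np.uint8)
    for x, y in landmarks.astype(int):
        cv2.circle(canvas, (int(x), int(y)), radius=2, color=(255, 255, 255), thickness=-1)
    return canvas

# Example (using the illustrative extract_landmarks helper from above):
# condition = landmarks_to_condition(extract_landmarks("face.jpg"), 512, 512)
```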
## Control the facial expressions and poses of generated images
For each generated image we keep the prompt and random seed used to produce it. Holding both fixed, we can edit the landmarks to modify the facial expressions and poses of the result.

![Control the facial expressions and poses of generated images](https://raw.githubusercontent.com/Georgefwt/Face-Landmark-ControlNet/master/assets/Controlthefacialexpressionsandposesofgeneratedimages.png)
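As an illustration of this fixed-prompt, fixed-seed workflow, here is a sketch written against the diffusers ControlNet pipeline. The repo's own demo (`gradio_landmark2image.py`) is built on the original ControlNet codebase and its checkpoint format, so the model path below is a placeholder for a hypothetical diffusers-format conversion, and the prompt and filenames are made up for the example:

```python
# Sketch: hold prompt and seed fixed, swap only the landmark condition image.
# Illustrative only; "path/to/landmark-controlnet-diffusers" is a hypothetical
# diffusers-format conversion of the checkpoint, not a path shipped by this repo.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "path/to/landmark-controlnet-diffusers", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of a smiling woman"  # illustrative prompt
seed = 1234                            # reusing the seed keeps the result stable

def generate(condition_image: Image.Image) -> Image.Image:
    # Recreate the generator per call so every run starts from the same noise.
    generator = torch.Generator(device="cuda").manual_seed(seed)
    return pipe(prompt, image=condition_image, generator=generator).images[0]

# Hypothetical filenames: same prompt and seed, two different landmark drawings,
# so only the pose and expression change between the outputs.
out_original = generate(Image.open("landmarks_original.png"))
out_edited = generate(Image.open("landmarks_edited.png"))
```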
## Credits