QiyuWu committed on
Commit 22d81d4 · verified · 1 Parent(s): 85ad46b

Upload 6 files

Files changed (7)
  1. .gitattributes +1 -0
  2. README.md +70 -13
  3. Video.gif +3 -0
  4. e4e.onnx +3 -0
  5. main.py +11 -0
  6. requirements.txt +12 -0
  7. start.py +67 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Video.gif filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,13 +1,70 @@
- ---
- title: StyleClip Demo
- emoji:
- colorFrom: red
- colorTo: indigo
- sdk: gradio
- sdk_version: 4.40.0
- app_file: app.py
- pinned: false
- license: apache-2.0
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # Web page project based on StyleClip
+
+ ### Web
+
+ The web page is available at https://wqyai.site/. Welcome to visit my website and enjoy the fun of editing pictures. :laughing:
+
+ ### Environment
+
+ To run the code, install the required packages with the following commands:
+
+ ```
+ conda install --yes -c pytorch pytorch=1.7.1 torchvision
+ pip install ftfy regex tqdm gdown flask onnx onnxruntime
+ pip install dlib==19.6.1
+ ```
+
+ ### Pre-trained models
+
+ This repository contains the front-end and back-end code of the web page, but not the pre-trained models used at runtime. Before running the project, download the pre-trained models from the links below.
+
+ ##### Pre-trained models for specific editing effects:
+
+ | File name | Download link |
+ | ----------------- | ------------------------------------------------------------ |
+ | angry.pt | https://drive.google.com/uc?id=1g82HEH0jFDrcbCtn3M22gesWKfzWV_ma |
+ | surprised.pt | https://drive.google.com/uc?id=1F-mPrhO-UeWrV1QYMZck63R43aLtPChI |
+ | bowlcut.pt | https://drive.google.com/uc?id=1xwdxI2YCewSt05dEHgkpmmzoauPjEnnZ |
+ | curly_hair.pt | https://drive.google.com/uc?id=1xZ7fFB12Ci6rUbUfaHPpo44xUFzpWQ6M |
+ | purple_hair.pt | https://drive.google.com/uc?id=14H0CGXWxePrrKIYmZnDD2Ccs65EEww75 |
+ | beyonce.pt | https://drive.google.com/uc?id=1KJTc-h02LXs4zqCyo7pzCp0iWeO6T9fz |
+ | depp.pt | https://drive.google.com/uc?id=1FPiJkvFPG_y-bFanxLLP91wUKuy-l3IV |
+ | hilary_clinton.pt | https://drive.google.com/uc?id=1X7U2zj2lt0KFifIsTfOOzVZXqYyCWVll |
+ | taylor_swift.pt | https://drive.google.com/uc?id=10jHuHsKKJxuf3N0vgQbX_SMEQgFHDrZa |
+ | trump.pt | https://drive.google.com/uc?id=14v8D0uzy4tOyfBU3ca9T0AzTt3v-dNyh |
+ | zuckerberg.pt | https://drive.google.com/uc?id=1NjDcMUL8G-pO3i_9N6EPpQNXeMc3Ar1r |
+ | afro.pt | https://drive.google.com/uc?id=1i5vAqo4z0I-Yon3FNft_YZOq7ClWayQJ |
+
+ *Note: the models above should be downloaded and placed in `Project/notebook/mapper/pretrained`.*
+
+ ##### Pre-trained models for the encoder and generator:
+
+ | File name | Download link |
+ | -------------------------- | ------------------------------------------------------------ |
+ | e4e_ffhq_encode.pt | https://drive.google.com/file/d/1cUv_reLE6k3604or78EranS7XzuVMWeO/view?usp=sharing |
+ | stylegan2-ffhq-config-f.pt | https://drive.google.com/file/d/1EM87UquaoQmk17Q8d5kYIAHqu0dkYqdT/view?usp=sharing |
+
+ *Note: the models above should be downloaded and placed in `Project/pretrained_models`.*
+
+ ### Run the code
+
+ ```shell
+ python start.py
+ ```
+
+ **The project code is based on the following GitHub repositories:**
+
+ - [orpatashnik/StyleCLIP: Official Implementation for "StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery" (ICCV 2021 Oral)](https://github.com/orpatashnik/StyleCLIP)
+ - [pbaylies/stylegan-encoder: StyleGAN Encoder - converts real images to latent space](https://github.com/pbaylies/stylegan-encoder)
+ - [omertov/encoder4editing: Official implementation of "Designing an Encoder for StyleGAN Image Manipulation" (SIGGRAPH 2021)](https://github.com/omertov/encoder4editing)
+
+ ### Examples
+
+ Here are some examples produced with the web page.
+
+ ![](./static/img/Snipaste_2022-08-21_21-32-17.png)
+
+ ![](./static/img/Snipaste_2022-08-21_21-34-26.png)
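The README above requires 14 checkpoint files across two directories before `start.py` will run. A small pre-flight check could catch a missing download early; this is a sketch of my own (the `missing_models` helper is not part of the project), with the directory and file names taken from the tables above:

```python
import os

# Paths and file names from the README's two model tables
MAPPER_DIR = "Project/notebook/mapper/pretrained"
MAPPER_FILES = ["angry.pt", "surprised.pt", "bowlcut.pt", "curly_hair.pt",
                "purple_hair.pt", "beyonce.pt", "depp.pt", "hilary_clinton.pt",
                "taylor_swift.pt", "trump.pt", "zuckerberg.pt", "afro.pt"]
ENCODER_DIR = "Project/pretrained_models"
ENCODER_FILES = ["e4e_ffhq_encode.pt", "stylegan2-ffhq-config-f.pt"]

def missing_models(root="."):
    """Return the expected model paths that do not exist under root."""
    expected = ([os.path.join(MAPPER_DIR, f) for f in MAPPER_FILES] +
                [os.path.join(ENCODER_DIR, f) for f in ENCODER_FILES])
    return [p for p in expected if not os.path.exists(os.path.join(root, p))]
```

Calling `missing_models()` from the project root before launch lists exactly which downloads are still missing.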
Video.gif ADDED

Git LFS Details

  • SHA256: 8141d10c89149bb90566a3c26fabdd28ff03802d38e7e392ffbbd16b91598ffd
  • Pointer size: 133 Bytes
  • Size of remote file: 45.9 MB
e4e.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a664ff927aa5566733a187fd29e3c842ed46bed051afb4f33dc15242a5b49b25
+ size 1068794864
main.py ADDED
@@ -0,0 +1,11 @@
+ from Project.aligned_image.aligned_images import align
+ from Project.scripts.inference import inference
+ from Project.notebook.out import out
+ def final(type):
+     # @param ['afro', 'angry', 'Beyonce', 'bobcut', 'bowlcut', 'curly hair', 'Hilary Clinton', 'Jhonny Depp', 'mohawk', 'purple hair', 'surprised', 'Taylor Swift', 'trump', 'Mark Zuckerberg']
+     align()
+     inference()
+     out(type)
+
+ if __name__ == "__main__":
+     final('afro')
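`main.py` runs the pipeline headlessly with one hard-coded style. As a small sketch (the `validate_style` helper and its name are my own assumption; the allowed names are copied from the `@param` comment above), the style argument could be checked before the expensive pipeline runs:

```python
# Allowed style names, copied from main.py's @param comment
ALLOWED = ['afro', 'angry', 'Beyonce', 'bobcut', 'bowlcut', 'curly hair',
           'Hilary Clinton', 'Jhonny Depp', 'mohawk', 'purple hair',
           'surprised', 'Taylor Swift', 'trump', 'Mark Zuckerberg']

def validate_style(style):
    """Raise early on an unknown style instead of failing mid-pipeline."""
    if style not in ALLOWED:
        raise ValueError(f"unknown style {style!r}; expected one of {ALLOWED}")
    return style
```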
requirements.txt ADDED
@@ -0,0 +1,12 @@
+ dlib==19.24.4
+ Flask==2.2.5
+ gradio==4.40.0
+ matplotlib==3.8.4
+ numpy==2.0.1
+ onnx==1.16.0
+ onnxruntime==1.18.1
+ Pillow==10.4.0
+ scipy==1.14.0
+ torch==2.2.2
+ torchvision==0.17.2
+ tqdm==4.66.4
start.py ADDED
@@ -0,0 +1,67 @@
+ import gradio as gr
+ import cv2
+ import numpy as np
+ from Project.aligned_image.aligned_images import align
+ from Project.scripts.inference import inference
+ from Project.notebook.out import out
+
+ def final(image, style):
+     myimg = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
+     cv2.imwrite('static/img_in/in.jpg', myimg)
+     align()
+     inference()
+     out(style)
+     result = cv2.imread('static/img_out/inference_results/00000.jpg')
+     aligned = cv2.imread('static/img_aligned/in_01.png')
+     return cv2.cvtColor(result, cv2.COLOR_RGB2BGR), cv2.cvtColor(aligned, cv2.COLOR_RGB2BGR)
+
+ # Create the Gradio interface
+ style_options = {
+     "Emotion": {'Angry': 'angry', 'Surprised': 'surprised'},
+     "Celebrity": {'Beyonce': 'Beyonce', 'Hilary Clinton': 'Hilary_clinton', 'Johnny Depp': 'Jhonny Depp', 'Taylor Swift': 'Taylor Swift', 'Trump': 'trump'},
+     "Hair Style": {'Afro': 'afro', 'Bowlcut': 'bowlcut', 'Curly Hair': 'curly hair', 'Purple Hair': 'purple hair'}
+ }
+
+ # Function that updates the style choices when the style type changes
+ def update_styles(style_type):
+     if not style_type:
+         return gr.Dropdown(choices=[])
+     return gr.Dropdown(choices=list(style_options[style_type].keys()))
+
+ # Build the Gradio UI
+ with gr.Blocks() as demo:
+     gr.Markdown("# Image Style Transfer")
+     gr.Markdown("### This app is based on StyleCLIP. Choose a style type and a style from the dropdowns below.")
+     with gr.Row():
+         with gr.Column():
+             image_input = gr.Image(type="numpy", label="Upload Image")
+             style_type_dropdown = gr.Dropdown(choices=list(style_options.keys()), label="Style Type")
+             style_dropdown = gr.Dropdown(choices=["Angry", "Curly Hair", "Taylor Swift"], label="Style")
+             style_type_dropdown.change(fn=update_styles, inputs=style_type_dropdown, outputs=style_dropdown)
+             with gr.Row():
+                 clear_button = gr.Button("Clear")
+                 submit_button = gr.Button("Submit")
+         with gr.Column():
+             output_image = gr.Image(type="numpy", label="Result Image")
+             aligned_image = gr.Image(type="numpy", label="Aligned Image")
+
+     def on_submit(image, style_type, style):
+         style_value = style_options[style_type][style]
+         return final(image, style_value)
+
+     def on_clear():
+         return None, None, None, None, None
+
+     clear_button.click(fn=on_clear, inputs=[], outputs=[image_input, style_type_dropdown, style_dropdown, output_image, aligned_image])
+     submit_button.click(fn=on_submit, inputs=[image_input, style_type_dropdown, style_dropdown], outputs=[output_image, aligned_image])
+
+     examples = gr.Examples(
+         examples=[
+             ["static/img/example1.jpg", "Emotion", "Angry"],
+             ["static/img/example2.jpg", "Celebrity", "Taylor Swift"],
+             ["static/img/example3.jpg", "Hair Style", "Curly Hair"],
+         ],
+         inputs=[image_input, style_type_dropdown, style_dropdown],
+     )
+ # Launch the Gradio app
+ demo.launch()
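In `final` above, Gradio hands the function an RGB numpy array while OpenCV's `imwrite` and `imread` work in BGR, so both `cvtColor` calls amount to a simple channel swap (for 3-channel images, `COLOR_BGR2RGB` and `COLOR_RGB2BGR` perform the same reversal). A numpy-only illustration of that swap (my own sketch, not project code):

```python
import numpy as np

def swap_channels(img):
    """Reverse the channel axis: RGB -> BGR or BGR -> RGB, the same
    reordering cv2.cvtColor does for 3-channel BGR2RGB / RGB2BGR."""
    return img[..., ::-1].copy()

# A single "red" pixel in RGB becomes blue-last-first in BGR and round-trips back.
rgb = np.array([[[255, 0, 0]]], dtype=np.uint8)
bgr = swap_channels(rgb)
assert (bgr == np.array([[[0, 0, 255]]], dtype=np.uint8)).all()
assert (swap_channels(bgr) == rgb).all()
```

Because the swap is its own inverse, writing the RGB array through one swap and reading results back through another leaves the displayed colors correct.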