Spanicin committed on
Commit e940a5b · verified · 1 Parent(s): 4ff8f19

Delete README.md

Files changed (1)
  1. README.md +0 -266
README.md DELETED
@@ -1,266 +0,0 @@
<div align="center">

<img src='https://user-images.githubusercontent.com/4397546/229094115-862c747e-7397-4b54-ba4a-bd368bfe2e0f.png' width='500px'/>

<!--<h2> 😭 SadTalker: <span style="font-size:12px">Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation </span> </h2> -->

<a href='https://arxiv.org/abs/2211.12194'><img src='https://img.shields.io/badge/ArXiv-PDF-red'></a> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<a href='https://sadtalker.github.io'><img src='https://img.shields.io/badge/Project-Page-Green'></a> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Winfredy/SadTalker/blob/main/quick_demo.ipynb) &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/vinthony/SadTalker) &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; [![sd webui-colab](https://img.shields.io/badge/Automatic1111-Colab-green)](https://colab.research.google.com/github/camenduru/stable-diffusion-webui-colab/blob/main/video/stable/stable_diffusion_1_5_video_webui_colab.ipynb) &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; [![Replicate](https://replicate.com/cjwbw/sadtalker/badge)](https://replicate.com/cjwbw/sadtalker)

<div>
<a target='_blank'>Wenxuan Zhang <sup>*,1,2</sup></a>&emsp;
<a href='https://vinthony.github.io/' target='_blank'>Xiaodong Cun <sup>*,2</sup></a>&emsp;
<a href='https://xuanwangvc.github.io/' target='_blank'>Xuan Wang <sup>3</sup></a>&emsp;
<a href='https://yzhang2016.github.io/' target='_blank'>Yong Zhang <sup>2</sup></a>&emsp;
<a href='https://xishen0220.github.io/' target='_blank'>Xi Shen <sup>2</sup></a>&emsp; </br>
<a href='https://yuguo-xjtu.github.io/' target='_blank'>Yu Guo <sup>1</sup></a>&emsp;
<a href='https://scholar.google.com/citations?hl=zh-CN&user=4oXBp9UAAAAJ' target='_blank'>Ying Shan <sup>2</sup></a>&emsp;
<a target='_blank'>Fei Wang <sup>1</sup></a>&emsp;
</div>
<br>
<div>
<sup>1</sup> Xi'an Jiaotong University &emsp; <sup>2</sup> Tencent AI Lab &emsp; <sup>3</sup> Ant Group &emsp;
</div>
<br>
<i><strong><a href='https://arxiv.org/abs/2211.12194' target='_blank'>CVPR 2023</a></strong></i>
<br>
<br>

![sadtalker](https://user-images.githubusercontent.com/4397546/222490039-b1f6156b-bf00-405b-9fda-0c9a9156f991.gif)

<b>TL;DR: single portrait image 🙎‍♂️ + audio 🎤 = talking head video 🎞.</b>

<br>

</div>

## 🔥 Highlight

- 🔥 The extension for [stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) is online. Check out more details [here](docs/webui_extension.md).

https://user-images.githubusercontent.com/4397546/231495639-5d4bb925-ea64-4a36-a519-6389917dac29.mp4

- 🔥 `full image mode` is online! Check [here](https://github.com/Winfredy/SadTalker#full-bodyimage-generation) for more details.

| still + enhancer in v0.0.1 | still + enhancer in v0.0.2 | [input image @bagbag1815](https://twitter.com/bagbag1815/status/1642754319094108161) |
|:--------------------:|:--------------------:|:----:|
| <video src="https://user-images.githubusercontent.com/48216707/229484996-5d7be64f-2553-4c9e-a452-c5cf0b8ebafe.mp4" type="video/mp4"> </video> | <video src="https://user-images.githubusercontent.com/4397546/230717873-355b7bf3-d3de-49f9-a439-9220e623fce7.mp4" type="video/mp4"> </video> | <img src='./examples/source_image/full_body_2.png' width='380'> |

- 🔥 Several new modes, e.g. `still mode`, `reference mode`, and `resize mode`, are online for better and more customizable applications (see the sketch after this list).

- 🔥 Happy to see more community demos on [Bilibili](https://search.bilibili.com/all?keyword=sadtalker&from_source=webtop_search&spm_id_from=333.1007&search_source=3), [YouTube](https://www.youtube.com/results?search_query=sadtalker&sp=CAM%253D) and [Twitter #sadtalker](https://twitter.com/search?q=%23sadtalker&src=typed_query).
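
A minimal sketch of how these modes map onto `inference.py` flags, under the assumption that mode selection works as in the quick-start examples of section 3; only `--still`, `--preprocess`, and `--enhancer` are documented in this README, so the `resize` preprocess value and the reference-mode flags below are assumptions inferred from the mode names — confirm them with `python inference.py --help` and [docs/best_practice.md](docs/best_practice.md):

```bash
# Still mode: keep the head pose close to the source image (documented in section 3).
python inference.py --driven_audio <audio.wav> --source_image <picture.png> --still

# Resize preprocessing (assumed value): control how the source image is cropped/resized.
python inference.py --driven_audio <audio.wav> --source_image <picture.png> --preprocess resize

# Reference mode (hypothetical flag name): drive eye blinks from a reference video.
# python inference.py --driven_audio <audio.wav> --source_image <picture.png> \
#                     --ref_eyeblink <reference.mp4>
```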

## 📋 Changelog (the previous changelog can be found [here](docs/changlelog.md))

- __[2023.04.15]__: Added an Automatic1111 Colab by @camenduru; thanks for this awesome Colab: [![sd webui-colab](https://img.shields.io/badge/Automatic1111-Colab-green)](https://colab.research.google.com/github/camenduru/stable-diffusion-webui-colab/blob/main/video/stable/stable_diffusion_1_5_video_webui_colab.ipynb).

- __[2023.04.12]__: Added a more detailed sd-webui installation document and fixed a reinstallation problem.

- __[2023.04.12]__: Fixed the sd-webui safety issues caused by third-party packages and optimized the output path in `sd-webui-extension`.

- __[2023.04.08]__: ❗️❗️❗️ In v0.0.2, we add a logo watermark to the generated video to prevent abuse, since the results are very realistic.

- __[2023.04.08]__: v0.0.2: full image animation, added Baidu Netdisk links for downloading checkpoints, and optimized the enhancer logic.

## 🚧 TODO

<details><summary> Previous TODOs </summary>

- [x] Generating a 2D face from a single image.
- [x] Generating a 3D face from audio.
- [x] Generating 4D free-view talking examples from audio and a single image.
- [x] Gradio/Colab demo.
- [x] Full body/image generation.
- [x] Integrate with stable-diffusion-webui. (stay tuned!)
</details>

- [ ] Audio-driven anime avatar.
- [ ] Training code for each component.

## If you have any problems, please read our [FAQ](docs/FAQ.md) before opening an issue.

## ⚙️ 1. Installation.

Tutorials from communities: [Windows tutorial (Chinese)](https://www.bilibili.com/video/BV1Dc411W7V6/) | [tutorial (Japanese)](https://br-d.fanbox.cc/posts/5685086?utm_campaign=manage_post_page&utm_medium=share&utm_source=twitter)

### Linux:

1. Install [Anaconda](https://www.anaconda.com/), Python, and git.

2. Create the environment and install the requirements.
```bash
git clone https://github.com/Winfredy/SadTalker.git

cd SadTalker

conda create -n sadtalker python=3.8

conda activate sadtalker

pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113

conda install ffmpeg

pip install -r requirements.txt

### TTS is optional, only needed for the gradio demo.
### pip install TTS
```

### Windows ([Windows tutorial in Chinese](https://www.bilibili.com/video/BV1Dc411W7V6/)):

1. Install [Python 3.10.6](https://www.python.org/downloads/windows/), checking "Add Python to PATH".
2. Install [git](https://git-scm.com/download/win) manually (or `scoop install git` via [scoop](https://scoop.sh/)).
3. Install `ffmpeg`, following [this instruction](https://www.wikihow.com/Install-FFmpeg-on-Windows) (or `scoop install ffmpeg` via [scoop](https://scoop.sh/)).
4. Download the SadTalker repository, for example by running `git clone https://github.com/Winfredy/SadTalker.git`.
5. Download the `checkpoint` and `gfpgan` models [below↓](https://github.com/Winfredy/SadTalker#-2-download-trained-models).
6. Run `start.bat` from Windows Explorer as a normal, non-administrator user; a Gradio WebUI demo will start. A command-line sketch of the scoop-based route is shown below.
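
For reference, a minimal command-line sketch of the scoop-based route above (a sketch only, assuming scoop is already installed and Python 3.10.6 is on PATH; model placement is covered in section 2 below):

```bash
# Install git and ffmpeg via scoop.
scoop install git ffmpeg

# Fetch the repository.
git clone https://github.com/Winfredy/SadTalker.git
cd SadTalker

# Place the downloaded models under .\checkpoints and .\gfpgan (see section 2),
# then launch the Gradio WebUI demo as a normal (non-administrator) user.
.\start.bat
```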

### MacBook:

More tips about installation on MacBook, and the Docker file, can be found [here](docs/install.md).

## 📥 2. Download Trained Models.

You can run the following script to put all the models in the right place.

```bash
bash scripts/download_models.sh
```

Other alternatives:
> We also provide an offline patch (`gfpgan/`), so no model needs to be downloaded at generation time.

**Google Drive**: download our pre-trained models from [this link (main checkpoints)](https://drive.google.com/drive/folders/1Wd88VDoLhVzYsQ30_qDVluQr_Xm46yHT?usp=sharing) and [gfpgan (offline patch)](https://drive.google.com/file/d/19AIBsmfcHW6BRJmeqSFlG5fL445Xmsyi?usp=sharing).

**GitHub Release Page**: download all the files from the [latest GitHub release page](https://github.com/Winfredy/SadTalker/releases) and put them in `./checkpoints`.

**Baidu Netdisk (百度云盘)**: we provide the models at [checkpoints, extraction code: sadt](https://pan.baidu.com/s/1nXuVNd0exUl37ISwWqbFGA?pwd=sadt) and [gfpgan, extraction code: sadt](https://pan.baidu.com/s/1kb1BCPaLOWX1JJb9Czbn6w?pwd=sadt).
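
If you download the files manually from one of the alternatives above, a minimal placement sketch (the target folders follow the Model Details table below; the download location `~/Downloads` and the exact set of files you have are assumptions — move whatever you actually downloaded):

```bash
# Create the expected folders at the repository root.
mkdir -p checkpoints gfpgan/weights

# Example: move individually downloaded checkpoint files (names taken from the
# Model Details table below) into place.
mv ~/Downloads/auido2exp_00300-model.pth   checkpoints/
mv ~/Downloads/auido2pose_00140-model.pth  checkpoints/
mv ~/Downloads/mapping_00229-model.pth.tar checkpoints/
```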

<details><summary>Model Details</summary>

The final folder will look like this:

<img width="331" alt="image" src="https://user-images.githubusercontent.com/4397546/232511411-4ca75cbf-a434-48c5-9ae0-9009e8316484.png">

Model descriptions:

| Model | Description |
| :--- | :---------- |
| checkpoints/auido2exp_00300-model.pth | Pre-trained ExpNet in SadTalker. |
| checkpoints/auido2pose_00140-model.pth | Pre-trained PoseVAE in SadTalker. |
| checkpoints/mapping_00229-model.pth.tar | Pre-trained MappingNet in SadTalker. |
| checkpoints/mapping_00109-model.pth.tar | Pre-trained MappingNet in SadTalker. |
| checkpoints/facevid2vid_00189-model.pth.tar | Pre-trained face-vid2vid model from [the reappearance of face-vid2vid](https://github.com/zhanglonghao1992/One-Shot_Free-View_Neural_Talking_Head_Synthesis). |
| checkpoints/epoch_20.pth | Pre-trained 3DMM extractor from [Deep3DFaceReconstruction](https://github.com/microsoft/Deep3DFaceReconstruction). |
| checkpoints/wav2lip.pth | Highly accurate lip-sync model from [Wav2lip](https://github.com/Rudrabha/Wav2Lip). |
| checkpoints/shape_predictor_68_face_landmarks.dat | Face landmark model used in [dlib](http://dlib.net/). |
| checkpoints/BFM | 3DMM library files. |
| checkpoints/hub | Face detection models used in [face alignment](https://github.com/1adrianb/face-alignment). |
| gfpgan/weights | Face detection and enhancement models used in `facexlib` and `gfpgan`. |

</details>

## 🔮 3. Quick Start ([Best Practice](docs/best_practice.md)).

### WebUI Demos:

**Online**: [Huggingface](https://huggingface.co/spaces/vinthony/SadTalker) | [SDWebUI-Colab](https://colab.research.google.com/github/camenduru/stable-diffusion-webui-colab/blob/main/video/stable/stable_diffusion_1_5_video_webui_colab.ipynb) | [Colab](https://colab.research.google.com/github/Winfredy/SadTalker/blob/main/quick_demo.ipynb)

**Local Automatic1111 stable-diffusion webui extension**: please refer to the [Automatic1111 stable-diffusion webui docs](docs/webui_extension.md).

**Local gradio demo**: a demo similar to our [Hugging Face demo](https://huggingface.co/spaces/vinthony/SadTalker) can be run with:

```bash
## You need to install TTS (https://github.com/coqui-ai/TTS) manually via `pip install TTS` in advance.
python app.py
```

**Local Windows gradio demo**: just double-click `webui.bat`; the requirements will be installed automatically.

### Manual usage:

##### Animating a portrait image with the default config:
```bash
python inference.py --driven_audio <audio.wav> \
                    --source_image <video.mp4 or picture.png> \
                    --enhancer gfpgan
```
The results will be saved in `results/$SOME_TIMESTAMP/*.mp4`.
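
For example, a hypothetical run using one of the sample images shipped in `examples/` (the audio path is an assumption — substitute your own `.wav` file):

```bash
# Hypothetical invocation; adjust the audio path to a real file on your machine.
python inference.py --driven_audio examples/driven_audio/my_audio.wav \
                    --source_image examples/source_image/full_body_2.png \
                    --enhancer gfpgan
# The generated video ends up under results/<timestamp>/ as an .mp4 file.
```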

##### Full body/image Generation:

Use `--still` to generate a natural full-body video. You can add `--enhancer` to improve the quality of the generated video.

```bash
python inference.py --driven_audio <audio.wav> \
                    --source_image <video.mp4 or picture.png> \
                    --result_dir <a folder to store results> \
                    --still \
                    --preprocess full \
                    --enhancer gfpgan
```

More examples, configurations, and tips can be found in the [>>> best practice documents <<<](docs/best_practice.md).

## 🛎 Citation

If you find our work useful in your research, please consider citing:

```bibtex
@article{zhang2022sadtalker,
  title={SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation},
  author={Zhang, Wenxuan and Cun, Xiaodong and Wang, Xuan and Zhang, Yong and Shen, Xi and Guo, Yu and Shan, Ying and Wang, Fei},
  journal={arXiv preprint arXiv:2211.12194},
  year={2022}
}
```

## 💗 Acknowledgements

Facerender code borrows heavily from [zhanglonghao's reproduction of face-vid2vid](https://github.com/zhanglonghao1992/One-Shot_Free-View_Neural_Talking_Head_Synthesis) and [PIRender](https://github.com/RenYurui/PIRender). We thank the authors for sharing their wonderful code. In the training process, we also use models from [Deep3DFaceReconstruction](https://github.com/microsoft/Deep3DFaceReconstruction) and [Wav2lip](https://github.com/Rudrabha/Wav2Lip), and we thank them for their wonderful work.

See also these wonderful third-party libraries we use:

- **Face Utils**: https://github.com/xinntao/facexlib
- **Face Enhancement**: https://github.com/TencentARC/GFPGAN
- **Image/Video Enhancement**: https://github.com/xinntao/Real-ESRGAN

## 🥂 Extensions:

- [SadTalker-Video-Lip-Sync](https://github.com/Zz-ww/SadTalker-Video-Lip-Sync) from [@Zz-ww](https://github.com/Zz-ww): SadTalker for video lip editing

## 🥂 Related Works

- [StyleHEAT: One-Shot High-Resolution Editable Talking Face Generation via Pre-trained StyleGAN (ECCV 2022)](https://github.com/FeiiYin/StyleHEAT)
- [CodeTalker: Speech-Driven 3D Facial Animation with Discrete Motion Prior (CVPR 2023)](https://github.com/Doubiiu/CodeTalker)
- [VideoReTalking: Audio-based Lip Synchronization for Talking Head Video Editing In the Wild (SIGGRAPH Asia 2022)](https://github.com/vinthony/video-retalking)
- [DPE: Disentanglement of Pose and Expression for General Video Portrait Editing (CVPR 2023)](https://github.com/Carlyx/DPE)
- [3D GAN Inversion with Facial Symmetry Prior (CVPR 2023)](https://github.com/FeiiYin/SPI/)
- [T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete Representations (CVPR 2023)](https://github.com/Mael-zys/T2M-GPT)

## 📢 Disclaimer

This is not an official product of Tencent. This repository can only be used for personal/research/non-commercial purposes.

LOGO: color and font suggestions from [ChatGPT](ai.com); logo font: [Montserrat Alternates](https://fonts.google.com/specimen/Montserrat+Alternates?preview.text=SadTalker&preview.text_type=custom&query=mont).

All copyrights of the demo images and audio belong to community users or come from generation with stable diffusion. Feel free to contact us if you feel uncomfortable.