Update Readme
- .gitattributes +1 -0
- README.md +397 -0
- demo/clone_voice.png +0 -0
- demo/cuda.png +0 -0
- demo/demo.gif +3 -0
- demo/demo.png +0 -0
- demo/demo1.png +0 -0
- demo/hf_aggrement.png +0 -0
- demo/input-video.png +0 -0
- demo/install.png +0 -0
- demo/install2.png +0 -0
- demo/keyframes-manager.png +0 -0
- demo/translate.png +0 -0
- demo/translate_info.png +0 -0
- demo/visual_studio_1.png +0 -0
- demo/visual_studio_2.png +0 -0
- demo/voice_files.png +0 -0
.gitattributes
CHANGED
```diff
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+demo/demo.gif filter=lfs diff=lfs merge=lfs -text
```
README.md
ADDED
@@ -0,0 +1,397 @@
# 🔉👄 Wav2Lip STUDIO

## <div align="center"><b><a href="README.md">English</a> | <a href="README_CN.md">简体中文</a></b></div>

<img src="https://user-images.githubusercontent.com/800903/258130805-26d9732f-4d33-4c7e-974e-7af2f1261768.gif" width="100%">

https://user-images.githubusercontent.com/800903/262435301-af205a91-30d7-43f2-afcc-05980d581fe0.mp4

## 💡 Description
This repository contains the Wav2Lip Studio standalone version.

It's an all-in-one solution: just choose a video and a speech file (wav or mp3), and the tool will generate a lip-synced video, with face swap, voice cloning, and video translation with voice cloning (HeyGen-like).
It improves the quality of lip-sync videos generated by the [Wav2Lip tool](https://github.com/Rudrabha/Wav2Lip) by applying specific post-processing techniques.

![Illustration](demo/demo.png)
![Illustration](demo/demo1.png)

## 📖 Quick Index
* [🚀 Updates](#-updates)
* [🔗 Requirements](#-requirements)
* [💻 Installation](#-installation)
* [🐍 Tutorial](#-tutorial)
* [🐍 Usage](#-usage)
* [👄 Keyframes Manager](#-keyframes-manager)
* [👄 Input Video](#-input-video)
* [📺 Examples](#-examples)
* [📖 Behind the scenes](#-behind-the-scenes)
* [💪 Quality tips](#-quality-tips)
* [⚠️ Noted Constraints](#-noted-constraints)
* [📝 To do](#-to-do)
* [😎 Contributing](#-contributing)
* [🙏 Appreciation](#-appreciation)
* [📝 Citation](#-citation)
* [📜 License](#-license)
* [☕ Support Wav2lip Studio](#-support-wav2lip-studio)

## 🚀 Updates
**2024.02.09 Speed Up Update (Standalone version only)**
- 👬 Clone voice: Added controls to manage voice cloning (see Usage section)
- 🎏 Translate video: Added features to the translate panel to manage translation (see Usage section)
- 📺 Trim: Added a feature to trim the video.
- 🔑 Automatic mask: Added a feature to automatically calculate the mask parameters (padding, dilate, ...). You can still change the parameters if needed.
- 🚀 Faster processing: All processes are now faster: analysis, face swap, and high-quality generation

**2024.01.20 Major Update (Standalone version only)**
- ♻ Project management: Added a feature to manage multiple projects
- 👪 Multiple face swap: You can now swap multiple faces in one shot (see Usage section)
- ⛔ Visible face restriction lifted: The whole process now runs even when no face is detected in a frame!
- 📺 Video size: Works with high-resolution video input (tested with 1920x1080; should work with 4K, but slowly)
- 🔑 Keyframes manager: Added a keyframes manager for better control of video generation
- 🍪 Coqui TTS integration: Removed the bark integration; use Coqui TTS instead (see Usage section)
- 💬 Conversation: Added a conversation feature with multiple speakers (see Usage section)
- 🔈 Record your own voice: Added a feature to record your own voice (see Usage section)
- 👬 Clone voice: Added a feature to clone voices from a video (see Usage section)
- 🎏 Translate video: Added a feature to translate a video with voice cloning (see Usage section)
- 🔉 Volume amplifier for Wav2Lip: Added a feature to amplify the volume of the audio sent to Wav2Lip (see Usage section)
- 🕡 Delay: Added an option to insert a delay before the speech starts
- 🚀 Speed up: Sped up the whole process

**2023.09.13**
- 👪 Introduced face swap: facefusion integration (see Usage section). **This feature is experimental.**

**2023.08.22**
- 👄 Introduced [bark](https://github.com/suno-ai/bark/) (see Usage section). **This feature is experimental.**

**2023.08.20**
- 🚢 Introduced the GFPGAN model as an option.
- ▶ Added the ability to resume generation.
- 📏 Optimized memory release after generation.

**2023.08.17**
- 🐛 Fixed the purple lips bug

**2023.08.16**
- ⚡ Added Wav2Lip and enhanced video outputs, with the option to download whichever is best for you, likely the "generated video".
- 🚢 Updated user interface: introduced control over CodeFormer fidelity.
- 👄 Removed image input; [SadTalker](https://github.com/OpenTalker/SadTalker) is better suited for that.
- 🐛 Fixed a bug where a discrepancy between the input and output video incorrectly positioned the mask.
- 💪 Refined the quality process for greater efficiency.
- 🚫 Interrupting the process now still produces a video from the frames already created

**2023.08.13**
- ⚡ Sped up computation
- 🚢 Changed user interface: added controls for previously hidden parameters
- 👄 Only track the mouth when needed
- 📰 Debug controls
- 🐛 Fixed a resize-factor bug

## 🔗 Requirements

- FFmpeg: download it from the [official FFmpeg site](https://ffmpeg.org/download.html). Follow the instructions for your operating system; note that ffmpeg has to be accessible from the command line.
- Make sure ffmpeg is in your PATH environment variable. If not, add it to your PATH environment variable.

1. pyannote.audio: You need to agree to share your contact information to access the pyannote models. To do so, go to both links:
   - [pyannote diarization-3.1 huggingface repository](https://huggingface.co/pyannote/speaker-diarization-3.1)
   - [pyannote segmentation-3.0 huggingface repository](https://huggingface.co/pyannote/segmentation-3.0)

   Fill in each field and click "Agree and access repository".
   ![Illustration](demo/hf_aggrement.png)

2. Create a Hugging Face access token:
   1. Log in with your account
   2. Go to [access tokens](https://huggingface.co/settings/token) in settings
   3. Create a new token in read mode
   4. Copy the token
   5. Paste it into the file api_keys.json
```json
{
  "huggingface_token": "your token"
}
```

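Once the token is saved, you can sanity-check the gated pyannote setup with a few lines of Python. This is a minimal verification sketch using the standard pyannote.audio 3.1 API, separate from the studio itself; `audio.wav` stands for any speech file you have at hand:

```python
import json
from pyannote.audio import Pipeline

# Read the token stored in api_keys.json (step 5 above)
with open("api_keys.json") as f:
    token = json.load(f)["huggingface_token"]

# Loading the gated pipeline fails if the repository
# agreements above have not been accepted
pipeline = Pipeline.from_pretrained(
    "pyannote/speaker-diarization-3.1", use_auth_token=token
)

# Diarize the speech file: print who speaks when
for turn, _, speaker in pipeline("audio.wav").itertracks(yield_label=True):
    print(f"{turn.start:.1f}s - {turn.end:.1f}s: {speaker}")
```

If this prints speaker turns, the token and both repository agreements are set up correctly.
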
## 💻 Installation
1. Install [python 3.10.11](https://www.python.org/downloads/release/python-31011/)
2. Install [git](https://git-scm.com/downloads)
3. Check the ffmpeg, python, cuda and git installations
```bash
python --version
git --version
ffmpeg -version
nvcc --version  # only if you have an NVIDIA GPU (not on macOS)
```
This must return something like:
```bash
Python 3.10.11
git version 2.35.1.windows.2
ffmpeg version N-110509-g722ff74055-20230506 Copyright (c) 2000-2023 the FFmpeg developers built with gcc 12.2.0 (crosstool-NG 1.25.0.152_89671bf) bla bla bla...
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:41:10_Pacific_Daylight_Time_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0
```

### Windows Users
1. Install [Cuda 11.8](https://developer.nvidia.com/cuda-11-8-0-download-archive) if you haven't already.
![Illustration](demo/cuda.png)
2. Install [Visual Studio](https://visualstudio.microsoft.com/fr/downloads/). During the install, make sure to include the Python and C++ packages in the Visual Studio installer.
![Illustration](demo/visual_studio_1.png)
![Illustration](demo/visual_studio_2.png)
3. If you have multiple Python versions on your computer, edit launch.py and uncomment the following line:
```bash
REM set PYTHON="your python.exe path"
```
so that it becomes:
```bash
set PYTHON="your python.exe path"
```
4. Double-click wav2lip-studio.bat; it will install the requirements and download the models.

### macOS Users

1. Install python 3.9
```bash
brew update
brew install python@3.9
brew install git-lfs
git-lfs install
```
2. Install the environment and requirements

```bash
cd /YourWav2lipStudioFolder
/opt/homebrew/bin/python3.9 -m venv venv
./venv/bin/python3.9 -m pip install torch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2
./venv/bin/python3.9 -m pip install -r requirements.txt
./venv/bin/python3.9 -m pip install transformers==4.33.2
./venv/bin/python3.9 -m pip install numpy==1.24.4
```

3. If `pip install -r requirements.txt` fails or takes too long, install the packages explicitly:

```bash
./venv/bin/python3.9 -m pip install inaSpeechSegmenter
./venv/bin/python3.9 -m pip install gradio==4.14.0 imutils==0.5.4 numpy opencv-python==4.8.0.76 scipy==1.11.2 requests==2.28.1 pillow==9.3.0 librosa==0.10.0 opencv-contrib-python==4.8.0.76 huggingface_hub==0.20.2 tqdm==4.66.1 cutlet==0.3.0 numba==0.57.1 imageio_ffmpeg==0.4.9 insightface==0.7.3 unidic==1.1.0 onnx==1.14.1 onnxruntime==1.16.0 psutil==5.9.5 lpips==0.1.4 GitPython==3.1.36 facexlib==0.3.0 gfpgan==1.3.8 gdown==4.7.1 pyannote.audio==3.1.1 TTS==0.21.2 openai-whisper==20231117 resampy==0.4.0 scenedetect==0.6.2 uvicorn==0.23.2 starlette==0.35.1 fastapi==0.109.0 fugashi
./venv/bin/python3.9 -m pip install torch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2
./venv/bin/python3.9 -m pip install transformers==4.33.2
./venv/bin/python3.9 -m pip install numpy==1.24.4
```

4. Install the models
```bash
git clone https://huggingface.co/numz/wav2lip_studio-0.2 models
```
5. Launch the UI
```bash
./venv/bin/python3.9 wav2lip_studio.py
```

## 🐍 Tutorial
- [FR version](https://youtu.be/43Q8YASkcUA)
- [EN version](https://youtu.be/B84A5alpPDc)

## 🐍 Usage
### PARAMETERS
1. Enter a project name and press Enter.
2. Choose a video (avi or mp4 format). Note that an avi file will not appear in the Video input preview, but the process will still work.
3. Face Swap (takes time, so be patient):
    - **Face Swap**: choose the image of the face(s) you want to swap with the face(s) in the video (multiple faces are now supported); the leftmost face is id 0.
4. **Resolution Divide Factor**: The resolution of the video will be divided by this factor. The higher the factor, the faster the process, but the lower the resolution of the output video.
5. **Min Face Width Detection**: The minimum width of a face to detect. Allows small faces in the video to be ignored.
6. **Align Faces**: Straightens the head before sending it to Wav2Lip processing.
7. **Keyframes On Speaker Change**: Generates a keyframe when the speaker changes, giving you better control over video generation.
8. **Keyframes On Scene Change**: Generates a keyframe when the scene changes, giving you better control over video generation.
9. When the parameters above are set, click **Generate Keyframes**. See the [Keyframes manager](#keyframes-manager) section for more details.
10. Audio, 3 options:
    1. Put an audio file in the "Speech" input, or record one with the "Record" button.
    2. Generate audio with the text-to-speech [coqui TTS](https://github.com/coqui-ai/TTS) integration (see the sketch after this list):
        1. Choose the language
        2. Choose the voice
        3. Write your speech in the "Prompt" text area, in text format or JSON format:
            1. Text format:
                ```
                Hello, my name is John. I am 25 years old.
                ```
            2. JSON format (you can ask ChatGPT to generate a discussion for you):
                ```json
                [
                  {
                    "start": 0.0,
                    "end": 3.0,
                    "text": "Hello, my name is John. I am 25 years old.",
                    "speaker": "arnold"
                  },
                  {
                    "start": 3.0,
                    "end": 4.0,
                    "text": "Ho really ?",
                    "speaker": "female_01"
                  },
                  ...
                ]
                ```
    3. Input Video: use the audio from the input video, with voice cloning and translation. See the [Input Video](#input-video) section for more details.
11. **Video Quality**:
    - **Low**: Original Wav2Lip quality, fast but not very good.
    - **Medium**: Better quality, achieved by applying post-processing to the mouth; slower.
    - **High**: Best quality, achieved by applying post-processing and upscaling the mouth; slower.
12. **Wav2lip Checkpoint**: Choose between 2 Wav2Lip models:
    - **Wav2lip**: Original Wav2Lip model, fast but not very good.
    - **Wav2lip GAN**: Better quality, achieved by applying post-processing to the mouth; slower.
13. **Face Restoration Model**: Choose between 2 face restoration models:
    - **Code Former**:
        - A value of 0 offers higher quality but may significantly alter the person's facial appearance and cause noticeable flickering between frames.
        - A value of 1 provides lower quality but maintains the person's face more consistently and reduces frame flickering.
        - Using a value below 0.5 is not advised. Adjust this setting to achieve optimal results; starting with a value of 0.75 is recommended.
    - **GFPGAN**: Usually better quality.
14. **Volume Amplifier**: Does not amplify the volume of the output audio; instead, it amplifies the volume of the audio sent to Wav2Lip, giving you better control over lip movement.

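For reference, option 10.2 corresponds to Coqui TTS's standard Python API. Below is a minimal sketch of a single generation; the model name, file paths, and reference clip are illustrative assumptions, not the studio's actual calls:

```python
from TTS.api import TTS

# XTTS v2 is Coqui's multilingual voice-cloning model (illustrative choice)
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Synthesize the "Prompt" text, cloning the voice from a short reference clip
tts.tts_to_file(
    text="Hello, my name is John. I am 25 years old.",
    speaker_wav="voices/arnold.wav",  # hypothetical clean, single-voice sample
    language="en",
    file_path="speech.wav",
)
```

Conceptually, the `speaker_wav` clip plays the same role as the voice files in the Clone Voices panel: a short, clean sample containing a single voice.
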
## KEYFRAMES MANAGER
![Illustration](demo/keyframes-manager.png)

### Global parameters
1. **Only Show Speaker Face**: Show only the face of the speaker; the other faces are hidden.
2. **Frame Number**: A slider to move between the frames of the video.
3. **Add Keyframe**: Adds a keyframe at the current frame number.
4. **Remove Keyframe**: Removes the keyframe at the current frame number.
5. **Keyframes**: A list of all the keyframes.

### For each face on a keyframe
1. **Face Id**: List of all the faces in the current keyframe.
2. **Translation info**: If a translation is associated with the project, it is shown here. Seeing the speaker helps you select the right speaker for this keyframe.
3. **Speaker**: Checkbox to mark the current Face Id of the current keyframe as the speaker.
4. **Face Swap Id**: Checkbox to set the face swap id for the current Face Id of the current keyframe.
5. **Automatic Mask**: Defaults to True; if False, you can draw the mask manually.
6. **Mouth Mask Dilate**: Dilates the mouth mask to cover more area around the mouth. The right amount depends on the mouth size.
7. **Face Mask Erode**: Erodes the face mask to remove some area around the face. The right amount depends on the face size.
8. **Mask Blur**: Blurs the mask to make it smoother. Try to keep it less than or equal to **Mouth Mask Dilate** (see the sketch below).
9. **Padding sliders**: Add padding around the head to avoid cutting it off in the video.

When you configure a keyframe, its influence extends until the next keyframe, so intermediate frames are generated with the same configuration.
Note that this configuration cannot be seen in the UI for intermediate frames.
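To make the three mask sliders concrete, here is a small OpenCV sketch of how a mouth mask could be dilated, eroded, blurred, and composited. It illustrates the general technique with made-up values and a toy mask, not the studio's exact implementation:

```python
import cv2
import numpy as np

dilate_px, erode_px, blur_px = 15, 11, 15  # made-up slider values

# Toy binary mouth mask: a white ellipse on a black frame
mask = np.zeros((256, 256), np.uint8)
cv2.ellipse(mask, (128, 180), (40, 20), 0, 0, 360, 255, -1)

# Mouth Mask Dilate: grow the mask so it hides the original mouth
mask = cv2.dilate(mask, np.ones((dilate_px, dilate_px), np.uint8))
# Face Mask Erode: shrink the mask away from the face boundary
mask = cv2.erode(mask, np.ones((erode_px, erode_px), np.uint8))
# Mask Blur: feather the edge (Gaussian kernel size must be odd)
k = blur_px | 1
mask = cv2.GaussianBlur(mask, (k, k), 0)

# Composite: blend the enhanced mouth into the original frame
original = np.zeros((256, 256, 3), np.float32)              # stand-in input frame
enhanced = np.full((256, 256, 3), 255.0, dtype=np.float32)  # stand-in enhanced frame
m = (mask.astype(np.float32) / 255.0)[..., None]
frame = enhanced * m + original * (1.0 - m)
```

This also shows why the blur should stay in proportion to the dilation: if the feathered edge extends past the dilated area, the original mouth can show through the blend.
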
## Input Video
![Illustration](demo/clone_voice.png)

If the translated audio contains no sound, the audio from the input video is used instead. This can be useful if the input video has bad lip-sync.

### Clone Voices
1. **Number Of Speakers**: The number of speakers in the video. Helps the cloning process know how many voices to clone.
2. **Remove Background Sound Before Clone**: Removes noise/music from the background sound before cloning.
3. **Clone Voices**: Clones the voices from the input video.
4. **Voices**: List of the cloned voices. You can rename voices to identify them in the translation.
   For each voice you can:
   - **Play**: Listen to the voice.
   - **Regen sentence**: Regenerate the sample sentence.
   - **Save voice**: Save the voice to your voices library.
5. **Voices Files**: List of the voice files used by the models to create the cloned voices. You can modify the voice files to change the cloned voices; make sure each file contains only one voice, with no background sound or music.
   You can listen to the voice files by clicking the play button, and change the speaker name to identify the voice.

![Illustration](demo/voice_files.png)
### Translation
The translation panel is linked to the cloned voices panel, because the translation tries to identify the speaker in order to translate the voice (see the sketch at the end of this section).
![Illustration](demo/translate.png)
1. **Language**: Target language for translating the input video.
2. **Whisper Model**: The whisper model to use for the translation. Choose between 5 models: the bigger the model, the better the quality, but the slower the process.
3. **Translate**: Translate the input video into the selected language.
4. **Translation**: The translated text.
5. **Translated Audio**: The translated audio.
6. **Convert To Audio**: Convert the translated text into translated audio.

For each segment of the translated text, you can:
- Modify the translated text
- Modify the start and end times of the segment
- Change the speaker of the segment
- Listen to the original audio by clicking the play button
- Listen to the translated audio by clicking the red ideogram button
- Generate the translation for this segment by clicking the recycle button
- Delete the segment by clicking the trash button
- Add a new segment below this one by clicking the arrow-down button
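For reference, the transcription side of this panel matches the standard openai-whisper API (pinned as openai-whisper==20231117 in the install instructions). A minimal sketch follows; note that whisper's own `translate` task only targets English, so translating into other languages, as the studio offers, presumably adds a separate translation step:

```python
import whisper

# One of the standard model sizes: tiny, base, small, medium, large
model = whisper.load_model("medium")

# Transcribe the input video's audio into timed segments
result = model.transcribe("input_audio.wav", task="translate")  # translate -> English
for seg in result["segments"]:
    print(f"[{seg['start']:6.1f}s - {seg['end']:6.1f}s] {seg['text']}")
```
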
## 📺 Examples

https://user-images.githubusercontent.com/800903/262439441-bb9d888a-d33e-4246-9f0a-1ddeac062d35.mp4

https://user-images.githubusercontent.com/800903/262442794-61b1e32f-3f87-4b36-98d6-f711822bdb1e.mp4

https://user-images.githubusercontent.com/800903/262449305-901086a3-22cb-42d2-b5be-a5f38db4549a.mp4

https://user-images.githubusercontent.com/800903/267808494-300f8cc3-9136-4810-86e2-92f2114a5f9a.mp4

## 📖 Behind the scenes

This extension operates in several stages to improve the quality of Wav2Lip-generated videos:

1. **Generate face swap video**: The script first generates the face swap video if an image is set in the "Face Swap" field. This operation takes time, so be patient.
2. **Generate a Wav2Lip video**: The script then generates a low-quality Wav2Lip video using the input video and audio.
3. **Video quality enhancement**: A high-quality video is created from the low-quality one using the enhancer defined by the user.
4. **Mask creation**: The script creates a mask around the mouth while trying to keep other facial motions, such as those of the cheeks and chin.
5. **Video generation**: The script then takes the high-quality mouth image and overlays it onto the original image, guided by the mouth mask. (A sketch of this flow follows below.)

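Put together, the five stages can be summarized as pseudocode. Every function here is a stub standing in for the stage described above; none of these names exist in the studio's actual code:

```python
# Structural sketch only: each stub stands in for one stage above.
def face_swap(frames, swap_image):            # stage 1: facefusion-style swap
    return frames

def wav2lip(frames, audio):                   # stage 2: low-quality lip-sync
    return frames

def enhance(frames, model):                   # stage 3: CodeFormer / GFPGAN
    return frames

def mouth_mask(frame):                        # stage 4: dilate / erode / blur
    return None

def composite(enhanced, original, mask):      # stage 5: masked overlay
    return enhanced

def run_pipeline(frames, audio, swap_image=None, enhancer="gfpgan"):
    if swap_image is not None:
        frames = face_swap(frames, swap_image)
    low_q = wav2lip(frames, audio)
    high_q = enhance(low_q, model=enhancer)
    return [composite(h, o, mouth_mask(o)) for h, o in zip(high_q, frames)]
```
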
## 💪 Quality tips
- Use a high-quality video as input
- Use a video with a consistent frame rate. Occasionally, videos exhibit unusual playback frame rates (not the standard 24, 25, 30, 60), which can lead to issues with the face mask.
- Use a high-quality audio file as input, without background noise or music. Clean the audio with a tool like [https://podcast.adobe.com/enhance](https://podcast.adobe.com/enhance).
- Dilate the mouth mask. This helps the model retain some facial motion and hide the original mouth.
- Keep Mask Blur at most twice the value of Mouth Mask Dilate. If you want to increase the blur, increase the value of Mouth Mask Dilate; otherwise the mouth will be blurred and the underlying mouth may become visible.
- Upscaling can improve the result, particularly around the mouth area, but it extends the processing duration. Use this tutorial from Olivio Sarikas to upscale your video: [https://www.youtube.com/watch?v=3z4MKUqFEUk](https://www.youtube.com/watch?v=3z4MKUqFEUk). Ensure the denoising strength is set between 0.0 and 0.05, select the 'revAnimated' model, and use batch mode. I'll create a tutorial for this soon.

## ⚠ Noted Constraints
- To speed up the process, try to keep the resolution under 1000x1000 px and upscale after processing.
- If the initial phase is excessively lengthy, consider using the "resize factor" to decrease the video's dimensions.
- While there's no strict size limit for videos, larger videos require more processing time. It's advisable to use the "resize factor" to minimize the video size and then upscale the video once processing is complete.

## Known issues
If you have issues installing insightface, follow these steps:
- Download the [insightface precompiled wheel](https://github.com/Gourieff/Assets/raw/main/Insightface/insightface-0.7.3-cp310-cp310-win_amd64.whl) and paste it into the root folder of Wav2lip-studio
- In a terminal, go to the wav2lip-studio folder and run the following commands:
```bash
.\venv\Scripts\activate
python -m pip install -U pip
python -m pip install insightface-0.7.3-cp310-cp310-win_amd64.whl
```
Enjoy!

## 📝 To do
- ✔️ Standalone version
- ✔️ Add a way to use a face swap image
- ✔️ Add the possibility to use a video for audio input
- ✔️ Convert avi to mp4. Avi is not shown in the video input preview, but the process works fine
- [ ] ComfyUI integration

## 😎 Contributing

We welcome contributions to this project. When submitting pull requests, please provide a detailed description of the changes. See [CONTRIBUTING](CONTRIBUTING.md) for more information.

## 🙏 Appreciation
- [Wav2Lip](https://github.com/Rudrabha/Wav2Lip)
- [CodeFormer](https://github.com/sczhou/CodeFormer)
- [Coqui TTS](https://github.com/coqui-ai/TTS)
- [facefusion](https://github.com/facefusion/facefusion)
- [Vocal Remover](https://github.com/tsurumeso/vocal-remover)

## ☕ Support Wav2lip Studio

This project is an open-source effort that is free to use and modify. I rely on the support of users to keep this project going and to help improve it. If you'd like to support me, you can make a donation on my Patreon page. Any contribution, large or small, is greatly appreciated!

Your support helps me cover the costs of development and maintenance, and allows me to allocate more time and resources to enhancing this project. Thank you for your support!

[Patreon page](https://www.patreon.com/Wav2LipStudio)

## 📝 Citation
If you use this project in your own work, in articles, tutorials, or presentations, we encourage you to cite this project to acknowledge the efforts put into it.

To cite this project, please use the following BibTeX format:

```bibtex
@misc{wav2lip_uhq,
  author = {numz},
  title = {Wav2Lip UHQ},
  year = {2023},
  howpublished = {GitHub repository},
  publisher = {numz},
  url = {https://github.com/numz/sd-wav2lip-uhq}
}
```

## 📜 License
* The code in this repository is released under the MIT license as found in the [LICENSE file](LICENSE).