# **Wav2Lip**: *Accurately Lip-syncing Videos In The Wild*

- Slightly modified to run as a SillyTavern extension by [Mozer](https://github.com/Mozer)
- Now supports CPU and caching, giving a 2x speed-up!
- Full info about this ST extension is here: https://github.com/Mozer/wav2lip_extension

## Original README

Are you looking to integrate this into a product? We have a turn-key hosted API with new and improved lip-syncing models here: https://synclabs.so/

For any other commercial licensing requests, please contact us at radrabha.m@research.iiit.ac.in or prajwal.k@research.iiit.ac.in.

This code is part of the paper: _A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild_ published at ACM Multimedia 2020.

[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/a-lip-sync-expert-is-all-you-need-for-speech/lip-sync-on-lrs2)](https://paperswithcode.com/sota/lip-sync-on-lrs2?p=a-lip-sync-expert-is-all-you-need-for-speech)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/a-lip-sync-expert-is-all-you-need-for-speech/lip-sync-on-lrs3)](https://paperswithcode.com/sota/lip-sync-on-lrs3?p=a-lip-sync-expert-is-all-you-need-for-speech)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/a-lip-sync-expert-is-all-you-need-for-speech/lip-sync-on-lrw)](https://paperswithcode.com/sota/lip-sync-on-lrw?p=a-lip-sync-expert-is-all-you-need-for-speech)

|📑 Original Paper|📰 Project Page|🌀 Demo|⚡ Live Testing|📔 Colab Notebook|
|:-:|:-:|:-:|:-:|:-:|
|[Paper](http://arxiv.org/abs/2008.10010)|[Project Page](http://cvit.iiit.ac.in/research/projects/cvit-projects/a-lip-sync-expert-is-all-you-need-for-speech-to-lip-generation-in-the-wild/)|[Demo Video](https://youtu.be/0fXaDCZNOJc)|[Interactive Demo](https://bhaasha.iiit.ac.in/lipsync)|[Colab Notebook](https://colab.research.google.com/drive/1tZpDWXz49W6wDcTprANRGLo2D_EbD5J8?usp=sharing) / [Updated Colab Notebook](https://colab.research.google.com/drive/1IjFW1cLevs6Ouyu4Yht4mnR4yeuMqO7Y#scrollTo=MH1m608OymLH)|

![Logo](https://lfldehquopamazavycth.supabase.co/storage/v1/object/public/sync-public/wav2lip_img.gif?t=2023-10-13T23%3A27%3A03.763Z)

----------
**Highlights**
----------
- Weights of the visual quality disc have been updated in the README!
- Lip-sync videos to any target speech with high accuracy :100:. Try our [interactive demo](https://bhaasha.iiit.ac.in/lipsync).
- :sparkles: Works for any identity, voice, and language. Also works for CGI faces and synthetic voices.
- Complete training code, inference code, and pretrained models are available :boom:
- Or, quick-start with the Google Colab Notebook: [Link](https://colab.research.google.com/drive/1tZpDWXz49W6wDcTprANRGLo2D_EbD5J8?usp=sharing). Checkpoints and samples are available in a Google Drive [folder](https://drive.google.com/drive/folders/1I-0dNLfFOSFwrfqjNa-SXuwaURHE5K4k?usp=sharing) as well. There is also a [tutorial video](https://www.youtube.com/watch?v=Ic0TBhfuOrA) on this, courtesy of [What Make Art](https://www.youtube.com/channel/UCmGXH-jy0o2CuhqtpxbaQgA). Also, thanks to [Eyal Gruss](https://eyalgruss.com), there is a more accessible [Google Colab notebook](https://j.mp/wav2lip) with more useful features. A tutorial Colab notebook is available at this [link](https://colab.research.google.com/drive/1IjFW1cLevs6Ouyu4Yht4mnR4yeuMqO7Y#scrollTo=MH1m608OymLH).
- :fire: :fire: Several new, reliable evaluation benchmarks and metrics [[`evaluation/` folder of this repo]](https://github.com/Rudrabha/Wav2Lip/tree/master/evaluation) released. Instructions to calculate the metrics reported in the paper are also included.

--------
**Disclaimer**
--------
All results from this open-source code or our [demo website](https://bhaasha.iiit.ac.in/lipsync) should be used for research/academic/personal purposes only. As the models are trained on the <a href="http://www.robots.ox.ac.uk/~vgg/data/lip_reading/lrs2.html">LRS2 dataset</a>, any form of commercial use is strictly prohibited. For commercial requests, please contact us directly!

Prerequisites
-------------
- `Python 3.6`
- ffmpeg: `sudo apt-get install ffmpeg`
- Install the necessary packages using `pip install -r requirements.txt`. Alternatively, instructions for using a Docker image are provided [here](https://gist.github.com/xenogenesi/e62d3d13dadbc164124c830e9c453668). Have a look at [this comment](https://github.com/Rudrabha/Wav2Lip/issues/131#issuecomment-725478562) and comment on [the gist](https://gist.github.com/xenogenesi/e62d3d13dadbc164124c830e9c453668) if you encounter any issues.
- The face detection [pre-trained model](https://www.adrianbulat.com/downloads/python-fan/s3fd-619a316812.pth) should be downloaded to `face_detection/detection/sfd/s3fd.pth`. Alternative [link](https://iiitaphyd-my.sharepoint.com/:u:/g/personal/prajwal_k_research_iiit_ac_in/EZsy6qWuivtDnANIG73iHjIBjMSoojcIV0NULXV-yiuiIg?e=qTasa8) if the above does not work.

Getting the weights
----------
| Model | Description | Link to the model |
| :-------------: | :---------------: | :---------------: |
| Wav2Lip | Highly accurate lip-sync | [Link](https://iiitaphyd-my.sharepoint.com/:u:/g/personal/radrabha_m_research_iiit_ac_in/Eb3LEzbfuKlJiR600lQWRxgBIY27JZg80f7V9jtMfbNDaQ?e=TBFBVW) |
| Wav2Lip + GAN | Slightly inferior lip-sync, but better visual quality | [Link](https://iiitaphyd-my.sharepoint.com/:u:/g/personal/radrabha_m_research_iiit_ac_in/EdjI7bZlgApMqsVoEUUXpLsBxqXbn5z8VTmoxp55YNDcIA?e=n9ljGW) |
| Expert Discriminator | Weights of the expert discriminator | [Link](https://iiitaphyd-my.sharepoint.com/:u:/g/personal/radrabha_m_research_iiit_ac_in/EQRvmiZg-HRAjvI6zqN9eTEBP74KefynCwPWVmF57l-AYA?e=ZRPHKP) |
| Visual Quality Discriminator | Weights of the visual disc trained in a GAN setup | [Link](https://iiitaphyd-my.sharepoint.com/:u:/g/personal/radrabha_m_research_iiit_ac_in/EQVqH88dTm1HjlK11eNba5gBbn15WMS0B0EZbDBttqrqkg?e=ic0ljo) |

Lip-syncing videos using the pre-trained models (Inference)
-------
You can lip-sync any video to any audio:
```bash
python inference.py --checkpoint_path <ckpt> --face <video.mp4> --audio <an-audio-source>
```
The result is saved (by default) in `results/result_voice.mp4`. You can specify it as an argument, similar to several other available options. The audio source can be any file supported by `ffmpeg` containing audio data: `*.wav`, `*.mp3`, or even a video file, from which the code will automatically extract the audio.

##### Tips for better results:
- Experiment with the `--pads` argument to adjust the detected face bounding box; this often improves results. You might need to increase the bottom padding to include the chin region, e.g. `--pads 0 20 0 0`.
- If the mouth position looks dislocated, or you see weird artifacts such as two mouths, the cause may be over-smoothing of the face detections. Use the `--nosmooth` argument and give it another try.
- Experiment with the `--resize_factor` argument to get a lower-resolution video. Why? The models are trained on faces at a lower resolution. You might get better, more visually pleasing results for 720p videos than for 1080p videos (although in many cases the latter works well too).
- The Wav2Lip model without GAN usually needs more experimenting with the above two arguments to get the most ideal results, and can sometimes give you a better result as well.

Preparing LRS2 for training
----------
Our models are trained on LRS2. See [here](#training-on-datasets-other-than-lrs2) for a few suggestions regarding training on other datasets.
##### LRS2 dataset folder structure

```
data_root (mvlrs_v1)
├── main, pretrain (we use only the main folder in this work)
│   ├── list of folders
│   │   ├── five-digit numbered video IDs ending with (.mp4)
```

Place the LRS2 filelists (train, val, test) `.txt` files in the `filelists/` folder.

##### Preprocess the dataset for fast training

```bash
python preprocess.py --data_root data_root/main --preprocessed_root lrs2_preprocessed/
```
Additional options like `batch_size` and the number of GPUs to use in parallel can also be set.

##### Preprocessed LRS2 folder structure
```
preprocessed_root (lrs2_preprocessed)
├── list of folders
│   ├── folders with five-digit numbered video IDs
│   │   ├── *.jpg
│   │   ├── audio.wav
```

Train!
----------
There are two major steps: (i) train the expert lip-sync discriminator, (ii) train the Wav2Lip model(s).

##### Training the expert discriminator
You can download [the pre-trained weights](#getting-the-weights) if you want to skip this step. To train it:
```bash
python color_syncnet_train.py --data_root lrs2_preprocessed/ --checkpoint_dir <folder_to_save_checkpoints>
```
##### Training the Wav2Lip models
You can either train the model without the additional visual quality discriminator (< 1 day of training) or with it (~2 days). For the former, run:
```bash
python wav2lip_train.py --data_root lrs2_preprocessed/ --checkpoint_dir <folder_to_save_checkpoints> --syncnet_checkpoint_path <path_to_expert_disc_checkpoint>
```

To train with the visual quality discriminator, run `hq_wav2lip_train.py` instead. The arguments for both files are similar. In both cases, you can resume training as well. See `python wav2lip_train.py --help` for more details. You can also set additional, less commonly used hyper-parameters at the bottom of the `hparams.py` file.

Training on datasets other than LRS2
------------------------------------
Training on other datasets might require modifications to the code. Please read the following before you raise an issue:

- You might not get good results by training/fine-tuning on a few minutes of a single speaker. This is a separate research problem, to which we do not have a solution yet; thus, we would most likely not be able to resolve your issue.
- You must train the expert discriminator on your own dataset before training Wav2Lip.
- If your dataset was downloaded from the web, in most cases it needs to be sync-corrected.
- Be mindful of the FPS of the videos in your dataset. Changing the FPS would require significant code changes.
- The expert discriminator's eval loss should go down to ~0.25 and the Wav2Lip eval sync loss should go down to ~0.2 to get good results.

When raising an issue on this topic, please let us know that you are aware of all these points.

We have an HD model trained on a dataset allowing commercial usage. The size of the generated face will be 192 x 288 in our new model.

Evaluation
----------
Please check the `evaluation/` folder for the instructions.

License and Citation
----------
This repository can only be used for personal/research/non-commercial purposes. For commercial requests, please contact us directly at radrabha.m@research.iiit.ac.in or prajwal.k@research.iiit.ac.in. We have an HD model trained on a dataset allowing commercial usage. The size of the generated face will be 192 x 288 in our new model. Please cite the following paper if you use this repository:
```
@inproceedings{10.1145/3394171.3413532,
  author = {Prajwal, K R and Mukhopadhyay, Rudrabha and Namboodiri, Vinay P. and Jawahar, C.V.},
  title = {A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild},
  year = {2020},
  isbn = {9781450379885},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3394171.3413532},
  doi = {10.1145/3394171.3413532},
  booktitle = {Proceedings of the 28th ACM International Conference on Multimedia},
  pages = {484--492},
  numpages = {9},
  keywords = {lip sync, talking face generation, video generation},
  location = {Seattle, WA, USA},
  series = {MM '20}
}
```

Acknowledgments
----------
Parts of the code structure are inspired by this [TTS repository](https://github.com/r9y9/deepvoice3_pytorch). We thank the author for this wonderful code. The code for face detection has been taken from the [face_alignment](https://github.com/1adrianb/face-alignment) repository. We thank the authors for releasing their code and models. We thank [zabique](https://github.com/zabique) for the tutorial Colab notebook.