# ReHiFace-S 🤗🤗🤗

## 📖 Introduction

ReHiFace-S, short for "Real Time High-Fidelity Faceswap", is a real-time, high-fidelity face-swapping algorithm created by Silicon-based Intelligence. By open-sourcing these digital-human generation capabilities, it lets developers easily create the large-scale digital humans they want, with real-time face swapping.

## 💪 Project features

- Real-time on an NVIDIA GTX 1080Ti
- Zero-shot inference
- High-fidelity face swapping
- Supports ONNX and a live camera mode
- Supports super resolution and color transfer
- Improved Xseg model for face segmentation

## 🎥 Examples

We show some face-swap examples below.
<p align="center">
<img src="./assets/demo20.gif" alt="showcase">
<br>
</p>
<p align="center">
<img src="./assets/demo10.gif" alt="showcase">
<br>
</p>

## 🔧 Getting Started

### Clone the code and prepare the environment

- Python >= 3.9 (we recommend [Anaconda](https://www.anaconda.com/download/#linux) or [Miniconda](https://docs.conda.io/en/latest/miniconda.html))
- [PyTorch >= 1.13](https://pytorch.org/)
- CUDA 11.7
- Linux (Ubuntu 20.04)

```bash
conda create --name faceswap python=3.9
conda activate faceswap
pip install -r requirements.txt
```

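As an optional check of ours (not one of the repo's scripts), you can confirm that the CUDA build of PyTorch is installed and that it can see the GPU before going further:

```python
# Optional environment check (not part of ReHiFace-S): confirm the CUDA build
# of PyTorch is installed and a GPU is visible.
import torch

print("torch:", torch.__version__, "| CUDA build:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```
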
## 📌 Pretrained models

Download all pretrained weights from [Google Drive](https://drive.google.com/drive/folders/1hVWFXPIDwACqoKKtgXAJubYC_H4k5njc?usp=drive_link) or [Baidu Yun](https://pan.baidu.com/s/1Bn47xOjZg-oU7_WyAHu3EQ?pwd=9bjo). All weights are packed into a single directory. Download them and place them in the `./pretrain_models` folder so that the directory structure looks like this:

```text
pretrain_models
├── 9O_865k.onnx
├── CurricularFace.tjm
├── gfpganv14_fp32_bs1_scale.onnx
├── pfpld_robust_sim_bs1_8003.onnx
├── scrfd_500m_bnkps_shape640x640.onnx
└── xseg_230611_16_17.onnx
```

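Since the models ship as ONNX files, a quick smoke test of ours (assuming `onnxruntime` is installed via the requirements) is to check that every expected file is present and that one of them loads:

```python
# Hedged smoke test (not part of ReHiFace-S): verify the downloaded weights and
# load the face-detector model with onnxruntime, falling back to CPU if needed.
from pathlib import Path
import onnxruntime as ort

weights_dir = Path("./pretrain_models")
expected = [
    "9O_865k.onnx",
    "CurricularFace.tjm",
    "gfpganv14_fp32_bs1_scale.onnx",
    "pfpld_robust_sim_bs1_8003.onnx",
    "scrfd_500m_bnkps_shape640x640.onnx",
    "xseg_230611_16_17.onnx",
]

missing = [name for name in expected if not (weights_dir / name).exists()]
assert not missing, f"Missing weights: {missing}"

sess = ort.InferenceSession(
    str(weights_dir / "scrfd_500m_bnkps_shape640x640.onnx"),
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print([(inp.name, inp.shape) for inp in sess.get_inputs()])
```
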
## 💻 How to Test

```bash
CUDA_VISIBLE_DEVICES=0 python inference.py
```

Or change the inputs by specifying the `--src_img_path` and `--video_path` arguments:

```bash
CUDA_VISIBLE_DEVICES=0 python inference.py --src_img_path <source_image> --video_path <target_video>
```

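If you want to swap one source face onto several target videos, a minimal batch driver of ours (not part of the repo; the paths below are placeholders) can simply call `inference.py` in a loop with the arguments documented above:

```python
# Hypothetical batch driver (not part of ReHiFace-S): run inference.py once per
# target video with the same source image, pinning the job to GPU 0.
import os
import subprocess
from pathlib import Path

SRC_IMG = "examples/source.jpg"        # placeholder: your source face image
VIDEO_DIR = Path("examples/videos")    # placeholder: directory of target .mp4 files

env = {**os.environ, "CUDA_VISIBLE_DEVICES": "0"}
for video in sorted(VIDEO_DIR.glob("*.mp4")):
    subprocess.run(
        ["python", "inference.py", "--src_img_path", SRC_IMG, "--video_path", str(video)],
        check=True,
        env=env,
    )
```
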
### Live Cam faceswap

You need at least an NVIDIA GTX 1080Ti to run this in real time.

***Notice: the time taken to render the output video and to warm up the models is not included.***

Super resolution is not supported in live-cam mode.

```bash
CUDA_VISIBLE_DEVICES=0 python inference_cam.py
```

***Notice: you can change the source face during a live session via `data/image_feature_dict.pkl`!***
<p align="center">
<img src="./assets/cam_demo1.gif" alt="showcase">
<br>
</p>
<p align="center">
<img src="./assets/cam_demo2.gif" alt="showcase">
<br>
</p>

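The live-cam script selects source faces through `data/image_feature_dict.pkl`. Its exact contents are defined by the repo; if you just want to see which source faces it contains, a hedged inspection sketch (assuming the file is a Python dict keyed by source-face identifiers) looks like this:

```python
# Hypothetical inspection snippet (not part of ReHiFace-S): peek at the keys of
# image_feature_dict.pkl, assuming it is a dict of precomputed identity features.
import pickle

with open("data/image_feature_dict.pkl", "rb") as f:
    feature_dict = pickle.load(f)

print(type(feature_dict))
if isinstance(feature_dict, dict):
    print("Available source faces:", list(feature_dict)[:10])
```
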
## 🤗 Gradio interface

We also provide a Gradio interface for a better experience. Just run:

```bash
python app.py
```

## ✨ Acknowledgments

- Thanks to [HifiFace](https://github.com/johannwyh/HifiFace) for the base face-swap framework.
- Thanks to [CurricularFace](https://github.com/HuangYG123/CurricularFace) for the pretrained face feature model.
- Thanks to [Xseg](https://github.com/iperov/DeepFaceLab/tree/master) for the base face segmentation framework.
- Thanks to [GFPGAN](https://github.com/TencentARC/GFPGAN) for face super resolution.
- Thanks to [LivePortrait](https://github.com/KwaiVGI/LivePortrait) and [duix.ai](https://github.com/GuijiAI/duix.ai) for the README template.

## 📒 Citation

If you find ReHiFace-S useful for your research, you are welcome to 🌟 this repo.