## DDDM-VC: Decoupled Denoising Diffusion Models with Disentangled Representation and Prior Mixup for Verified Robust Voice Conversion

The official PyTorch implementation of DDDM-VC (AAAI 2024)

[Ha-Yeong Choi*](https://github.com/hayeong0), [Sang-Hoon Lee*](https://github.com/sh-lee-prml), Seong-Whan Lee

## [Paper](https://arxiv.org/abs/2305.15816) | [Project Page](https://hayeong0.github.io/DDDM-VC-demo/) | [Audio Sample](https://dddm-vc.github.io/demo/)

![image](https://github.com/hayeong0/DDDM-VC/assets/47182864/8c2e862a-5ac2-4720-b8fd-0d8967bcc92b)
<p align="center"><em> Overall architecture </em></p>

> Diffusion-based generative models have recently exhibited powerful generative performance. However, as many attributes exist in the data distribution, and owing to several limitations of sharing the model parameters across all levels of the generation process, it remains challenging to control specific styles for each attribute. To address this problem, we introduce decoupled denoising diffusion models (DDDMs) with disentangled representations, which enable effective style transfer for each attribute in generative models. In particular, we apply DDDMs to voice conversion (VC), tackling the intricate challenge of disentangling and individually transferring each speech attribute, such as linguistic information, intonation, and timbre. First, we use a self-supervised representation to disentangle the speech representation. Subsequently, the DDDMs are applied to resynthesize the speech from the disentangled representations for style transfer with respect to each attribute. Moreover, we propose the prior mixup for robust voice style transfer, which uses the converted representation of the mixed style as a prior distribution for the diffusion models. The experimental results reveal that our method outperforms publicly available VC models. Furthermore, we show that our method provides robust generative performance even with a smaller model size.

## 📑 Pre-trained Model
Our model checkpoints can be downloaded [here](https://drive.google.com/drive/folders/1tDIQ5Nv-X2svhcww35LWMC1El3SDlI_I?usp=sharing); a quick path check for the downloaded files is sketched after the list.

- model_base.pth
- voc_ckpt.pth
- voc_bigvgan.pth
- f0_vqvae.pth
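
An optional way to verify the download is to check that the files sit where the example inference command below expects them (`./ckpt/`, `./vocoder/`, `./f0_vqvae/`). This layout is inferred from that command, not mandated by the repository, and `voc_bigvgan.pth` is left out because the example does not reference it.

```
# Optional sanity check (not part of the repo): verify the checkpoint paths
# assumed by the example inference command further below.
from pathlib import Path

EXPECTED = [
    Path("./ckpt/model_base.pth"),    # DDDM-VC model
    Path("./vocoder/voc_ckpt.pth"),   # vocoder checkpoint used by infer.sh
    Path("./f0_vqvae/f0_vqvae.pth"),  # F0 VQ-VAE
]

missing = [p for p in EXPECTED if not p.is_file()]
if missing:
    raise FileNotFoundError("Missing checkpoints: " + ", ".join(map(str, missing)))
print("All expected checkpoints found.")
```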

## ⚙️ Setup
1. Clone this repository and install the Python requirements:
```
git clone https://github.com/hayeong0/DDDM-VC.git
pip install -r requirements.txt
```
2. Download the pre-trained model checkpoints from the Google Drive link above.

## 🔨 Usage
### Preprocess
1. Data
   - Training requires both wav files and F0 features, which we extract using YAAPT through the script `./preprocess/extract_f0.py`; see the sketch after this list.
   - After extracting F0, create a list of files with the path to each data item, as shown in the following example:
```
train_wav.txt
/workspace/raid/dataset/LibriTTS_16k/train-clean-360/100/121669/100_121669_000001_000000.wav
/workspace/raid/dataset/LibriTTS_16k/train-clean-360/100/121669/100_121669_000003_000000.wav

train_f0.txt
/workspace/raid/dataset/LibriTTS_f0_norm/train-clean-360/100/121669/100_121669_000001_000000.pt
/workspace/raid/dataset/LibriTTS_f0_norm/train-clean-360/100/121669/100_121669_000003_000000.pt
```
2. F0_VQVAE
   - We trained the f0_vqvae model using the [SpeechResynthesis repository](https://github.com/facebookresearch/speech-resynthesis).
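
The two data-preparation steps above can be scripted end to end. The sketch below is an unofficial illustration that assumes 16 kHz mono wav input, the `amfm_decompy` YAAPT implementation, and made-up frame settings, and it applies no normalization even though the example paths (`LibriTTS_f0_norm`) suggest the official `./preprocess/extract_f0.py` does; prefer the provided script for real preprocessing.

```
# Rough, unofficial sketch: extract YAAPT F0 per wav, save it as a .pt tensor,
# and write matching lines to train_wav.txt / train_f0.txt. Frame settings and
# the lack of normalization are assumptions, not the repository's choices.
from pathlib import Path

import numpy as np
import torch
import amfm_decompy.basic_tools as basic
import amfm_decompy.pYAAPT as pYAAPT
from scipy.io import wavfile

WAV_ROOT = Path("/workspace/raid/dataset/LibriTTS_16k")    # roots taken from the example above
F0_ROOT = Path("/workspace/raid/dataset/LibriTTS_f0_norm")

def extract_f0(wav_path: Path) -> torch.Tensor:
    sr, audio = wavfile.read(wav_path)
    audio = audio.astype(np.float64) / 32768.0             # int16 PCM -> roughly [-1, 1]
    signal = basic.SignalObj(audio, sr)
    pitch = pYAAPT.yaapt(signal, frame_length=20.0, frame_space=5.0)
    return torch.from_numpy(pitch.samp_values.astype(np.float32))

Path("filelist").mkdir(exist_ok=True)
with open("filelist/train_wav.txt", "w") as wav_list, open("filelist/train_f0.txt", "w") as f0_list:
    for wav_path in sorted(WAV_ROOT.rglob("*.wav")):
        f0_path = F0_ROOT / wav_path.relative_to(WAV_ROOT).with_suffix(".pt")
        f0_path.parent.mkdir(parents=True, exist_ok=True)
        torch.save(extract_f0(wav_path), f0_path)
        wav_list.write(f"{wav_path}\n")
        f0_list.write(f"{f0_path}\n")
```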

### 🔍 Training
- For training, prepare a file list with the following structure:
```
|-- filelist
|   |-- train_f0.txt
|   |-- train_wav.txt
|   |-- test_f0.txt
|   `-- test_wav.txt
```
- Run `train_dddmvc.py`
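
Before launching training, it can help to confirm that the wav and F0 lists actually pair up. The snippet below is an optional check, not part of the repository, and it only assumes the file naming shown in the LibriTTS example above.

```
# Optional pre-training check: the wav and F0 lists should have the same length
# and pair up by utterance name (file stem).
from pathlib import Path

def read_list(path):
    with open(path) as f:
        return [Path(line.strip()) for line in f if line.strip()]

for split in ("train", "test"):
    wavs = read_list(f"filelist/{split}_wav.txt")
    f0s = read_list(f"filelist/{split}_f0.txt")
    assert len(wavs) == len(f0s), f"{split}: {len(wavs)} wav entries vs {len(f0s)} F0 entries"
    for wav, f0 in zip(wavs, f0s):
        assert wav.stem == f0.stem, f"mismatched pair: {wav} / {f0}"
    print(f"{split}: {len(wavs)} aligned pairs")
```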

### 🔑 Inference
- Run `infer.sh`
```
bash infer.sh

python3 inference.py \
    --src_path './sample/src_p227_013.wav' \
    --trg_path './sample/tar_p229_005.wav' \
    --ckpt_model './ckpt/model_base.pth' \
    --ckpt_voc './vocoder/voc_ckpt.pth' \
    --ckpt_f0_vqvae './f0_vqvae/f0_vqvae.pth' \
    --output_dir './converted' \
    -t 6
```
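
To convert many utterances, one simple option is to loop over source/target pairs and call the documented `inference.py` command. The wrapper below is only a convenience sketch around that CLI; the checkpoint paths, output directory, and `-t 6` are copied from the example, and the pair list is a placeholder.

```
# Convenience sketch: batch voice conversion by invoking the inference.py CLI
# shown above once per (source, target) pair.
import subprocess

PAIRS = [
    ("./sample/src_p227_013.wav", "./sample/tar_p229_005.wav"),
    # add more (source_wav, target_wav) pairs here
]

for src, trg in PAIRS:
    subprocess.run(
        [
            "python3", "inference.py",
            "--src_path", src,
            "--trg_path", trg,
            "--ckpt_model", "./ckpt/model_base.pth",
            "--ckpt_voc", "./vocoder/voc_ckpt.pth",
            "--ckpt_f0_vqvae", "./f0_vqvae/f0_vqvae.pth",
            "--output_dir", "./converted",
            "-t", "6",
        ],
        check=True,
    )
```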

🎧 Train and test it on your own dataset and share your interesting results! 🤗

## 🎓 Citation
```
@article{choi2023dddm,
  title={DDDM-VC: Decoupled Denoising Diffusion Models with Disentangled Representation and Prior Mixup for Verified Robust Voice Conversion},
  author={Choi, Ha-Yeong and Lee, Sang-Hoon and Lee, Seong-Whan},
  journal={arXiv preprint arXiv:2305.15816},
  year={2023}
}
```

## 💎 Acknowledgements
- [DiffVC](https://github.com/huawei-noah/Speech-Backbones/tree/main/DiffVC): for the overall diffusion source code
- [Speech-Resynthesis](https://github.com/facebookresearch/speech-resynthesis): for the F0 VQ-VAE
- [HiFiGAN](https://github.com/jik876/hifi-gan): for the vocoder
- [torch-nansypp](https://github.com/revsic/torch-nansypp): for data augmentation

## License
This work is licensed under a
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].

[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]

[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png
[cc-by-nc-sa-shield]: https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg