---
license: mit
language:
- en
---

# Nanospeech

[GitHub](https://www.github.com/lucasnewman/nanospeech)

### A simple, hackable text-to-speech system in PyTorch and MLX

Nanospeech is a research-oriented project to build a minimal, easy-to-understand text-to-speech system that scales to any level of compute. It supports voice matching from a reference speech sample and comes with a variety of built-in voices.

An 82M-parameter pretrained model (English-only) is available, trained on a single H100 GPU in a few days using only public domain data. The model is intentionally small, both to serve as a reproducible baseline and to allow fast inference. On recent M-series Apple Silicon or Nvidia GPUs, speech can be generated roughly 3-5x faster than realtime.

All code and pretrained models are available under the MIT license, so you can modify and/or distribute them as you'd like.

## Details

Nanospeech is based on a current [line of research](#citations) in text-to-speech systems which jointly learn text alignment and waveform generation. It's designed to use minimal input data — just audio and text — and avoid any auxiliary models, such as forced aligners or phonemizers.

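The systems cited below train a model to predict the velocity of a straight path from noise to data (conditional flow matching). As a rough NumPy illustration of that training objective, not Nanospeech's actual code, the path and regression target can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_matching_target(x0, x1, t):
    """Linear interpolation path x_t and its constant velocity target.

    x0: noise sample, x1: data (e.g. a mel spectrogram), t in [0, 1].
    """
    xt = (1.0 - t) * x0 + t * x1   # point on the straight path from noise to data
    v = x1 - x0                    # velocity the model learns to predict
    return xt, v

# Toy "mel spectrogram" batch: (batch, frames, mel_bins)
x1 = rng.standard_normal((2, 10, 4))
x0 = rng.standard_normal(x1.shape)
t = rng.uniform(size=(2, 1, 1))    # one timestep per batch element

xt, v = flow_matching_target(x0, x1, t)

# A network v_theta(xt, t, text) would be regressed onto v with an MSE loss;
# here a zero array stands in for the network's prediction.
pred = np.zeros_like(v)
loss = np.mean((pred - v) ** 2)
```

At sampling time, an ODE solver integrates the learned velocity field from noise toward a waveform representation conditioned on the text.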
There are two single-file implementations, one in [PyTorch](./nanospeech/nanospeech_torch.py) and one in [MLX](./nanospeech/nanospeech_mlx.py), kept nearly line-for-line equivalent where possible to make them easy to experiment with and modify. Each implementation is around 1,500 lines of code.

## Quick Start

```bash
pip install nanospeech
```

```bash
python -m nanospeech.generate --text "The quick brown fox jumps over the lazy dog."
```

### Voices

Use the `--voice` parameter to select the voice used for speech:

`celeste` — [Sample](https://s3.amazonaws.com/lucasnewman.datasets/nanospeech/samples/celeste.wav)

`luna` — [Sample](https://s3.amazonaws.com/lucasnewman.datasets/nanospeech/samples/luna.wav)

`nash` — [Sample](https://s3.amazonaws.com/lucasnewman.datasets/nanospeech/samples/nash.wav)

`orion` — [Sample](https://s3.amazonaws.com/lucasnewman.datasets/nanospeech/samples/orion.wav)

`rhea` — [Sample](https://s3.amazonaws.com/lucasnewman.datasets/nanospeech/samples/rhea.wav)

Note that these voices are all based on samples from the [LibriTTS-R](https://www.openslr.org/141/) dataset.

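For example, to generate the sample sentence with the `luna` voice:

```shell
python -m nanospeech.generate --text "The quick brown fox jumps over the lazy dog." --voice luna
```
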
### Voice Matching

You can also provide a speech sample and its transcript to match a specific voice, although the pretrained model has limited voice matching capabilities. See `python -m nanospeech.generate --help` for a full list of options to customize the voice.

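As an illustration only, a voice-matching invocation might look like the following. The flag names here (`--ref-audio`, `--ref-text`) are assumptions, so check the `--help` output for the actual option names:

```shell
# Flag names are hypothetical; consult `python -m nanospeech.generate --help`.
python -m nanospeech.generate \
  --text "Text to speak in the matched voice." \
  --ref-audio my_voice.wav \
  --ref-text "Transcript of my_voice.wav."
```
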
## Training a Model

Nanospeech includes a PyTorch-based trainer using Accelerate, and is compatible with DistributedDataParallel for multi-GPU training.

It supports streaming from any [WebDataset](https://github.com/webdataset/webdataset), but it should be straightforward to swap in your own dataloader as well. An ideal dataset consists of high-quality speech paired with clean transcriptions.

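WebDataset shards are plain tar files in which the members of each sample share a basename (e.g. `000000.wav` plus `000000.txt`). As a library-free sketch of how paired speech and transcripts can be packed into such a shard (the key format and extensions here are illustrative, not a format Nanospeech mandates):

```python
import io
import tarfile

def write_shard(fileobj, samples):
    """Pack (key, wav_bytes, transcript) samples into a WebDataset-style tar.

    Each sample becomes two members sharing a basename (<key>.wav, <key>.txt),
    which is the pairing convention WebDataset readers expect.
    """
    with tarfile.open(fileobj=fileobj, mode="w") as tar:
        for key, wav_bytes, transcript in samples:
            for name, payload in ((f"{key}.wav", wav_bytes),
                                  (f"{key}.txt", transcript.encode("utf-8"))):
                info = tarfile.TarInfo(name=name)
                info.size = len(payload)
                tar.addfile(info, io.BytesIO(payload))

# Toy in-memory shard, with placeholder bytes standing in for real audio.
buf = io.BytesIO()
write_shard(buf, [
    ("000000", b"fake-wav-bytes", "the quick brown fox"),
    ("000001", b"fake-wav-bytes", "jumps over the lazy dog"),
])
buf.seek(0)
```

A directory of such shards can then be streamed during training without unpacking individual files.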
See the [examples](./examples/) directory for training both the base model and the duration predictor on the large-scale [Emilia](https://huggingface.co/datasets/amphion/Emilia-Dataset) dataset (note: Emilia is CC BY-NC 4.0 licensed).

## Limitations

As a research project, the pretrained model that comes with Nanospeech isn't designed for production usage. It may mispronounce words, has limited capability to match out-of-distribution voices, and can't generate very long speech samples.

However, the underlying architecture should scale well to significantly more compute and larger datasets, so if you train your own model, you can extend it to perform high-quality voice matching, multilingual speech generation, emotional expression, and more.

## Citations

```bibtex
@article{chen-etal-2024-f5tts,
  title = {F5-TTS: A Fairytaler that Fakes Fluent and Faithful Speech with Flow Matching},
  author = {Yushen Chen and Zhikang Niu and Ziyang Ma and Keqi Deng and Chunhui Wang and Jian Zhao and Kai Yu and Xie Chen},
  year = {2024},
  url = {https://api.semanticscholar.org/CorpusID:273228169}
}
```

```bibtex
@inproceedings{Eskimez2024E2TE,
  title = {E2 TTS: Embarrassingly Easy Fully Non-Autoregressive Zero-Shot TTS},
  author = {Sefik Emre Eskimez and Xiaofei Wang and Manthan Thakker and Canrun Li and Chung-Hsien Tsai and Zhen Xiao and Hemin Yang and Zirun Zhu and Min Tang and Xu Tan and Yanqing Liu and Sheng Zhao and Naoyuki Kanda},
  year = {2024},
  url = {https://api.semanticscholar.org/CorpusID:270738197}
}
```

```bibtex
@article{Le2023VoiceboxTM,
  title = {Voicebox: Text-Guided Multilingual Universal Speech Generation at Scale},
  author = {Matt Le and Apoorv Vyas and Bowen Shi and Brian Karrer and Leda Sari and Rashel Moritz and Mary Williamson and Vimal Manohar and Yossi Adi and Jay Mahadeokar and Wei-Ning Hsu},
  year = {2023},
  url = {https://api.semanticscholar.org/CorpusID:259275061}
}
```

```bibtex
@article{tong2023generalized,
  title = {Improving and Generalizing Flow-Based Generative Models with Minibatch Optimal Transport},
  author = {Alexander Tong and Joshua Fan and Ricky T. Q. Chen and Jesse Bettencourt and David Duvenaud},
  year = {2023},
  url = {https://api.semanticscholar.org/CorpusID:259847293}
}
```

```bibtex
@article{peebles2022scalable,
  title = {Scalable Diffusion Models with Transformers},
  author = {Peebles, William and Xie, Saining},
  year = {2022},
  url = {https://api.semanticscholar.org/CorpusID:254854389}
}
```

```bibtex
@article{lipman2022flow,
  title = {Flow Matching for Generative Modeling},
  author = {Yaron Lipman and Ricky T. Q. Chen and Heli Ben-Hamu and Maximilian Nickel and Matt Le},
  year = {2022},
  url = {https://api.semanticscholar.org/CorpusID:252734897}
}
```

```bibtex
@article{koizumi2023librittsr,
  title = {LibriTTS-R: A Restored Multi-Speaker Text-to-Speech Corpus},
  author = {Yuma Koizumi and Heiga Zen and Shigeki Karita and Yifan Ding and Kohei Yatabe and Nobuyuki Morioka and Michiel Bacchiani and Yu Zhang and Wei Han and Ankur Bapna},
  year = {2023},
  url = {https://api.semanticscholar.org/CorpusID:258967444}
}
```