{ "cells": [ { "cell_type": "markdown", "id": "f79d99ef", "metadata": {}, "source": [ "# Train your first 🐸 TTS model 💫\n", "\n", "### 👋 Hello and welcome to Coqui (🐸) TTS\n", "\n", "The goal of this notebook is to show you a **typical workflow** for **training** and **testing** a TTS model with 🐸.\n", "\n", "Let's train a very small model on a very small amount of data so we can iterate quickly.\n", "\n", "In this notebook, we will:\n", "\n", "1. Download data and format it for 🐸 TTS.\n", "2. Configure the training and testing runs.\n", "3. Train a new model.\n", "4. Test the model and display its performance.\n", "\n", "So, let's jump right in!\n" ] }, { "cell_type": "code", "execution_count": null, "id": "fa2aec78", "metadata": {}, "outputs": [], "source": [ "## Install Coqui TTS\n", "! pip install -U pip\n", "! pip install TTS" ] }, { "cell_type": "markdown", "id": "be5fe49c", "metadata": {}, "source": [ "## ✅ Data Preparation\n", "\n", "### **First things first**: we need some data.\n", "\n", "We're training a Text-to-Speech model, so we need some _text_ and we need some _speech_. Specificially, we want _transcribed speech_. The speech must be divided into audio clips and each clip needs transcription. More details about data requirements such as recording characteristics, background noise and vocabulary coverage can be found in the [🐸TTS documentation](https://tts.readthedocs.io/en/latest/formatting_your_dataset.html).\n", "\n", "If you have a single audio file and you need to **split** it into clips. It is also important to use a lossless audio file format to prevent compression artifacts. We recommend using **wav** file format.\n", "\n", "The data format we will be adopting for this tutorial is taken from the widely-used **LJSpeech** dataset, where **waves** are collected under a folder:\n", "\n", "\n", "/wavs
\n", "  | - audio1.wav
\n", "  | - audio2.wav
\n", "  | - audio3.wav
\n", " ...
\n", "
\n", "\n", "and a **metadata.csv** file will have the audio file name in parallel to the transcript, delimited by `|`: \n", " \n", "\n", "# metadata.csv
\n", "audio1|This is my sentence.
\n", "audio2|This is maybe my sentence.
\n", "audio3|This is certainly my sentence.
\n", "audio4|Let this be your sentence.
\n", "...\n", "
\n", "\n", "In the end, we should have the following **folder structure**:\n", "\n", "\n", "/MyTTSDataset
\n", " |
\n", " | -> metadata.csv
\n", " | -> /wavs
\n", "  | -> audio1.wav
\n", "  | -> audio2.wav
\n", "  | ...
\n", "
" ] }, { "cell_type": "markdown", "id": "69501a10-3b53-4e75-ae66-90221d6f2271", "metadata": {}, "source": [ "🐸TTS already provides tooling for the _LJSpeech_. if you use the same format, you can start training your models right away.
\n", "\n", "After you collect and format your dataset, you need to check two things. Whether you need a **_formatter_** and a **_text_cleaner_**.
The **_formatter_** loads the text file (created above) as a list and the **_text_cleaner_** performs a sequence of text normalization operations that converts the raw text into the spoken representation (e.g. converting numbers to text, acronyms, and symbols to the spoken format).\n", "\n", "If you use a different dataset format then the LJSpeech or the other public datasets that 🐸TTS supports, then you need to write your own **_formatter_** and **_text_cleaner_**." ] }, { "cell_type": "markdown", "id": "e7f226c8-4e55-48fa-937b-8415d539b17c", "metadata": {}, "source": [ "## ⏳️ Loading your dataset\n", "Load one of the dataset supported by 🐸TTS.\n", "\n", "We will start by defining dataset config and setting LJSpeech as our target dataset and define its path.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "b3cb0191-b8fc-4158-bd26-8423c2a8ba66", "metadata": {}, "outputs": [], "source": [ "import os\n", "\n", "# BaseDatasetConfig: defines name, formatter and path of the dataset.\n", "from TTS.tts.configs.shared_configs import BaseDatasetConfig\n", "\n", "output_path = \"tts_train_dir\"\n", "if not os.path.exists(output_path):\n", " os.makedirs(output_path)\n", " " ] }, { "cell_type": "code", "execution_count": null, "id": "ae6b7019-3685-4b48-8917-c152e288d7e3", "metadata": {}, "outputs": [], "source": [ "# Download and extract LJSpeech dataset.\n", "\n", "!wget -O $output_path/LJSpeech-1.1.tar.bz2 https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2 \n", "!tar -xf $output_path/LJSpeech-1.1.tar.bz2 -C $output_path" ] }, { "cell_type": "code", "execution_count": null, "id": "76cd3ab5-6387-45f1-b488-24734cc1beb5", "metadata": {}, "outputs": [], "source": [ "dataset_config = BaseDatasetConfig(\n", " formatter=\"ljspeech\", meta_file_train=\"metadata.csv\", path=os.path.join(output_path, \"LJSpeech-1.1/\")\n", ")" ] }, { "cell_type": "markdown", "id": "ae82fd75", "metadata": {}, "source": [ "## ✅ Train a new model\n", "\n", "Let's kick off a training run 🚀🚀🚀.\n", "\n", "Deciding on the model architecture you'd want to use is based on your needs and available resources. Each model architecture has it's pros and cons that define the run-time efficiency and the voice quality.\n", "We have many recipes under `TTS/recipes/` that provide a good starting point. For this tutorial, we will be using `GlowTTS`." ] }, { "cell_type": "markdown", "id": "f5876e46-2aee-4bcf-b6b3-9e3c535c553f", "metadata": {}, "source": [ "We will begin by initializing the model training configuration." 
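
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c9d2e150",
   "metadata": {},
   "source": [
    "First, though, the custom **_formatter_** sketch promised earlier. It is a hypothetical illustration (`my_formatter` is not part of 🐸TTS) and is **not needed in this tutorial**, since we use the built-in `ljspeech` formatter. It assumes a `metadata.csv` in the two-column `file_id|transcript` format shown above; in recent 🐸TTS versions such a function can be passed to `load_tts_samples` through its `formatter` argument."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c9d2e151",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "# A minimal custom formatter sketch: parse `<file_id>|<transcript>` lines\n",
    "# and return the list of sample dicts that 🐸TTS consumes.\n",
    "def my_formatter(root_path, meta_file, **kwargs):\n",
    "    items = []\n",
    "    with open(os.path.join(root_path, meta_file), encoding=\"utf-8\") as f:\n",
    "        for line in f:\n",
    "            if not line.strip():\n",
    "                continue\n",
    "            file_id, text = line.strip().split(\"|\", maxsplit=1)\n",
    "            items.append({\n",
    "                \"text\": text,\n",
    "                \"audio_file\": os.path.join(root_path, \"wavs\", file_id + \".wav\"),\n",
    "                \"speaker_name\": \"my_speaker\",\n",
    "                \"root_path\": root_path,\n",
    "            })\n",
    "    return items"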
] }, { "cell_type": "code", "execution_count": null, "id": "5483ca28-39d6-49f8-a18e-4fb53c50ad84", "metadata": {}, "outputs": [], "source": [ "# GlowTTSConfig: all model related values for training, validating and testing.\n", "from TTS.tts.configs.glow_tts_config import GlowTTSConfig\n", "config = GlowTTSConfig(\n", " batch_size=32,\n", " eval_batch_size=16,\n", " num_loader_workers=4,\n", " num_eval_loader_workers=4,\n", " run_eval=True,\n", " test_delay_epochs=-1,\n", " epochs=100,\n", " text_cleaner=\"phoneme_cleaners\",\n", " use_phonemes=True,\n", " phoneme_language=\"en-us\",\n", " phoneme_cache_path=os.path.join(output_path, \"phoneme_cache\"),\n", " print_step=25,\n", " print_eval=False,\n", " mixed_precision=True,\n", " output_path=output_path,\n", " datasets=[dataset_config],\n", " save_step=1000,\n", ")" ] }, { "cell_type": "markdown", "id": "b93ed377-80b7-447b-bd92-106bffa777ee", "metadata": {}, "source": [ "Next we will initialize the audio processor which is used for feature extraction and audio I/O." ] }, { "cell_type": "code", "execution_count": null, "id": "b1b12f61-f851-4565-84dd-7640947e04ab", "metadata": {}, "outputs": [], "source": [ "from TTS.utils.audio import AudioProcessor\n", "ap = AudioProcessor.init_from_config(config)\n", "# Modify sample rate if for a custom audio dataset:\n", "# ap.sample_rate = 22050\n" ] }, { "cell_type": "markdown", "id": "1d461683-b05e-403f-815f-8007bda08c38", "metadata": {}, "source": [ "Next we will initialize the tokenizer which is used to convert text to sequences of token IDs. If characters are not defined in the config, default characters are passed to the config." ] }, { "cell_type": "code", "execution_count": null, "id": "014879b7-f18d-44c0-b24a-e10f8002113a", "metadata": {}, "outputs": [], "source": [ "from TTS.tts.utils.text.tokenizer import TTSTokenizer\n", "tokenizer, config = TTSTokenizer.init_from_config(config)" ] }, { "cell_type": "markdown", "id": "df3016e1-9e99-4c4f-94e3-fa89231fd978", "metadata": {}, "source": [ "Next we will load data samples. Each sample is a list of ```[text, audio_file_path, speaker_name]```. You can define your custom sample loader returning the list of samples." ] }, { "cell_type": "code", "execution_count": null, "id": "cadd6ada-c8eb-4f79-b8fe-6d72850af5a7", "metadata": {}, "outputs": [], "source": [ "from TTS.tts.datasets import load_tts_samples\n", "train_samples, eval_samples = load_tts_samples(\n", " dataset_config,\n", " eval_split=True,\n", " eval_split_max_size=config.eval_split_max_size,\n", " eval_split_size=config.eval_split_size,\n", ")" ] }, { "cell_type": "markdown", "id": "db8b451e-1fe1-4aa3-b69e-ab22b925bd19", "metadata": {}, "source": [ "Now we're ready to initialize the model.\n", "\n", "Models take a config object and a speaker manager as input. Config defines the details of the model like the number of layers, the size of the embedding, etc. Speaker manager is used by multi-speaker models." ] }, { "cell_type": "code", "execution_count": null, "id": "ac2ffe3e-ad0c-443e-800c-9b076ee811b4", "metadata": {}, "outputs": [], "source": [ "from TTS.tts.models.glow_tts import GlowTTS\n", "model = GlowTTS(config, ap, tokenizer, speaker_manager=None)" ] }, { "cell_type": "markdown", "id": "e2832c56-889d-49a6-95b6-eb231892ecc6", "metadata": {}, "source": [ "Trainer provides a generic API to train all the 🐸TTS models with all its perks like mixed-precision training, distributed training, etc." 
] }, { "cell_type": "code", "execution_count": null, "id": "0f609945-4fe0-4d0d-b95e-11d7bfb63ebe", "metadata": {}, "outputs": [], "source": [ "from trainer import Trainer, TrainerArgs\n", "trainer = Trainer(\n", " TrainerArgs(), config, output_path, model=model, train_samples=train_samples, eval_samples=eval_samples\n", ")" ] }, { "cell_type": "markdown", "id": "5b320831-dd83-429b-bb6a-473f9d49d321", "metadata": {}, "source": [ "### AND... 3,2,1... START TRAINING 🚀🚀🚀" ] }, { "cell_type": "code", "execution_count": null, "id": "d4c07f99-3d1d-4bea-801e-9f33bbff0e9f", "metadata": {}, "outputs": [], "source": [ "trainer.fit()" ] }, { "cell_type": "markdown", "id": "4cff0c40-2734-40a6-a905-e945a9fb3e98", "metadata": {}, "source": [ "#### 🚀 Run the Tensorboard. 🚀\n", "On the notebook and Tensorboard, you can monitor the progress of your model. Also Tensorboard provides certain figures and sample outputs." ] }, { "cell_type": "code", "execution_count": null, "id": "5a85cd3b-1646-40ad-a6c2-49323e08eeec", "metadata": {}, "outputs": [], "source": [ "!pip install tensorboard\n", "!tensorboard --logdir=tts_train_dir" ] }, { "cell_type": "markdown", "id": "9f6dc959", "metadata": {}, "source": [ "## ✅ Test the model\n", "\n", "We made it! 🙌\n", "\n", "Let's kick off the testing run, which displays performance metrics.\n", "\n", "We're committing the cardinal sin of ML 😈 (aka - testing on our training data) so you don't want to deploy this model into production. In this notebook we're focusing on the workflow itself, so it's forgivable 😇\n", "\n", "You can see from the test output that our tiny model has overfit to the data, and basically memorized this one sentence.\n", "\n", "When you start training your own models, make sure your testing data doesn't include your training data 😅" ] }, { "cell_type": "markdown", "id": "99fada7a-592f-4a09-9369-e6f3d82de3a0", "metadata": {}, "source": [ "Let's get the latest saved checkpoint. " ] }, { "cell_type": "code", "execution_count": null, "id": "6dd47ed5-da8e-4bf9-b524-d686630d6961", "metadata": {}, "outputs": [], "source": [ "import glob, os\n", "output_path = \"tts_train_dir\"\n", "ckpts = sorted([f for f in glob.glob(output_path+\"/*/*.pth\")])\n", "configs = sorted([f for f in glob.glob(output_path+\"/*/*.json\")])" ] }, { "cell_type": "code", "execution_count": null, "id": "dd42bc7a", "metadata": {}, "outputs": [], "source": [ " !tts --text \"Text for TTS\" \\\n", " --model_path $test_ckpt \\\n", " --config_path $test_config \\\n", " --out_path out.wav" ] }, { "cell_type": "markdown", "id": "81cbcb3f-d952-469b-a0d8-8941cd7af670", "metadata": {}, "source": [ "## 📣 Listen to the synthesized wave 📣" ] }, { "cell_type": "code", "execution_count": null, "id": "e0000bd6-6763-4a10-a74d-911dd08ebcff", "metadata": {}, "outputs": [], "source": [ "import IPython\n", "IPython.display.Audio(\"out.wav\")" ] }, { "cell_type": "markdown", "id": "13914401-cad1-494a-b701-474e52829138", "metadata": {}, "source": [ "## 🎉 Congratulations! 🎉 You now have trained your first TTS model! \n", "Follow up with the next tutorials to learn more advanced material." 
] }, { "cell_type": "code", "execution_count": null, "id": "950d9fc6-896f-4a2c-86fd-8fd1fcbbb3f7", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.10" } }, "nbformat": 4, "nbformat_minor": 5 }