---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: speecht5_finetuned_en32_lr201
  results: []
---
# speecht5_finetuned_en32_lr201

This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 1.1221
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1500
- mixed_precision_training: Native AMP
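
Two of the values above are derived rather than independent: the total train batch size is `train_batch_size × gradient_accumulation_steps` (32 × 8 = 256), and the `linear` scheduler ramps the learning rate up over the 500 warmup steps, then decays it linearly to zero at step 1500. A minimal sketch of that arithmetic (plain Python, illustrative only, not the actual training code):

```python
# Hyperparameters as listed in this card.
TRAIN_BATCH_SIZE = 32
GRAD_ACCUM_STEPS = 8
LEARNING_RATE = 1e-3
WARMUP_STEPS = 500
TRAINING_STEPS = 1500

def effective_batch_size() -> int:
    """Examples seen per optimizer update: per-device batch x accumulation."""
    return TRAIN_BATCH_SIZE * GRAD_ACCUM_STEPS  # 32 * 8 = 256

def linear_lr(step: int) -> float:
    """LR at a given optimizer step under linear warmup + linear decay."""
    if step < WARMUP_STEPS:
        # Ramp from 0 up to the peak LR over the warmup steps.
        return LEARNING_RATE * step / WARMUP_STEPS
    # After warmup, decay linearly to 0 at TRAINING_STEPS.
    remaining = max(0, TRAINING_STEPS - step)
    return LEARNING_RATE * remaining / (TRAINING_STEPS - WARMUP_STEPS)

print(effective_batch_size())  # 256
print(linear_lr(250))          # halfway through warmup: 0.0005
print(linear_lr(1500))         # end of training: 0.0
```

This mirrors the behavior of `get_linear_schedule_with_warmup` in `transformers`, which is what the `Trainer` uses when `lr_scheduler_type` is `linear`.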
### Training results

| Training Loss | Epoch    | Step | Validation Loss |
|:-------------:|:--------:|:----:|:---------------:|
| 0.4125        | 27.5862  | 100  | 0.4236          |
| 0.3922        | 55.1724  | 200  | 0.4396          |
| 0.4041        | 82.7586  | 300  | 0.4569          |
| 1.2069        | 110.3448 | 400  | 1.5275          |
| 1.1191        | 137.9310 | 500  | 1.1303          |
| 1.1334        | 165.5172 | 600  | 1.1247          |
| 1.1152        | 193.1034 | 700  | 1.1261          |
| 1.1065        | 220.6897 | 800  | 1.1219          |
| 1.1053        | 248.2759 | 900  | 1.1215          |
| 1.1055        | 275.8621 | 1000 | 1.1253          |
| 1.1281        | 303.4483 | 1100 | 1.1225          |
| 1.1074        | 331.0345 | 1200 | 1.1230          |
| 1.1043        | 358.6207 | 1300 | 1.1228          |
| 1.1038        | 386.2069 | 1400 | 1.1227          |
| 1.1036        | 413.7931 | 1500 | 1.1221          |
### Framework versions
- Transformers 4.42.4
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1