---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: Youtube3kTTSModel
  results: []
---

# Youtube3kTTSModel

This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.4839
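
Because this checkpoint is a SpeechT5 text-to-speech fine-tune, it can be loaded with the standard `transformers` SpeechT5 classes. The snippet below is a minimal sketch, not a confirmed usage recipe: the repository ID `Mohsen21/Youtube3kTTSModel` and the use of `Matthijs/cmu-arctic-xvectors` for a speaker embedding are assumptions. SpeechT5 always needs a 512-dimensional x-vector speaker embedding and an external vocoder such as `microsoft/speecht5_hifigan`.

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

# Repo ID assumed from this card; replace with the actual checkpoint path if it differs.
model_id = "Mohsen21/Youtube3kTTSModel"

processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# SpeechT5 requires a 512-dim x-vector speaker embedding; this public set is one option.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Hello, this is a test of the fine-tuned model.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)

# SpeechT5 generates audio at 16 kHz.
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```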

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 5000
- mixed_precision_training: Native AMP
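
For reference, the hyperparameters above map onto a `Seq2SeqTrainingArguments` configuration roughly like the following sketch. Only the values listed in this card (plus the 100-step evaluation interval visible in the results table) are taken from it; `output_dir`, `save_steps`, and `logging_steps` are placeholders, not the author's actual settings.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: values not stated in the card are placeholder assumptions.
training_args = Seq2SeqTrainingArguments(
    output_dir="Youtube3kTTSModel",  # assumed output directory
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,   # effective train batch size: 4 * 8 = 32
    warmup_steps=100,
    max_steps=5000,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
    eval_strategy="steps",
    eval_steps=100,                  # matches the 100-step evaluation interval below
    save_steps=100,                  # assumed
    logging_steps=25,                # assumed
)
```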

### Training results

| Training Loss | Epoch   | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.6301        | 0.2222  | 100  | 0.5639          |
| 0.6013        | 0.4444  | 200  | 0.5471          |
| 0.5666        | 0.6667  | 300  | 0.5315          |
| 0.5632        | 0.8889  | 400  | 0.5254          |
| 0.5547        | 1.1111  | 500  | 0.5184          |
| 0.5583        | 1.3333  | 600  | 0.5211          |
| 0.5527        | 1.5556  | 700  | 0.5150          |
| 0.5508        | 1.7778  | 800  | 0.5123          |
| 0.5432        | 2.0     | 900  | 0.5135          |
| 0.5478        | 2.2222  | 1000 | 0.5077          |
| 0.5419        | 2.4444  | 1100 | 0.5073          |
| 0.5439        | 2.6667  | 1200 | 0.5083          |
| 0.5381        | 2.8889  | 1300 | 0.5108          |
| 0.5355        | 3.1111  | 1400 | 0.5075          |
| 0.5317        | 3.3333  | 1500 | 0.5053          |
| 0.5345        | 3.5556  | 1600 | 0.5022          |
| 0.5329        | 3.7778  | 1700 | 0.5006          |
| 0.53          | 4.0     | 1800 | 0.4965          |
| 0.5261        | 4.2222  | 1900 | 0.4971          |
| 0.5272        | 4.4444  | 2000 | 0.4976          |
| 0.5272        | 4.6667  | 2100 | 0.4943          |
| 0.5282        | 4.8889  | 2200 | 0.4938          |
| 0.5188        | 5.1111  | 2300 | 0.4980          |
| 0.523         | 5.3333  | 2400 | 0.4894          |
| 0.5225        | 5.5556  | 2500 | 0.4915          |
| 0.5178        | 5.7778  | 2600 | 0.4960          |
| 0.5165        | 6.0     | 2700 | 0.4893          |
| 0.5098        | 6.2222  | 2800 | 0.4892          |
| 0.512         | 6.4444  | 2900 | 0.4868          |
| 0.5177        | 6.6667  | 3000 | 0.4868          |
| 0.5128        | 6.8889  | 3100 | 0.4883          |
| 0.5062        | 7.1111  | 3200 | 0.4852          |
| 0.5104        | 7.3333  | 3300 | 0.4898          |
| 0.5126        | 7.5556  | 3400 | 0.4887          |
| 0.5093        | 7.7778  | 3500 | 0.4908          |
| 0.5075        | 8.0     | 3600 | 0.4828          |
| 0.5029        | 8.2222  | 3700 | 0.4842          |
| 0.5079        | 8.4444  | 3800 | 0.4850          |
| 0.5049        | 8.6667  | 3900 | 0.4853          |
| 0.5034        | 8.8889  | 4000 | 0.4849          |
| 0.4984        | 9.1111  | 4100 | 0.4833          |
| 0.5079        | 9.3333  | 4200 | 0.4863          |
| 0.5023        | 9.5556  | 4300 | 0.4830          |
| 0.5023        | 9.7778  | 4400 | 0.4833          |
| 0.5037        | 10.0    | 4500 | 0.4825          |
| 0.5035        | 10.2222 | 4600 | 0.4822          |
| 0.5011        | 10.4444 | 4700 | 0.4826          |
| 0.4969        | 10.6667 | 4800 | 0.4815          |
| 0.4958        | 10.8889 | 4900 | 0.4839          |
| 0.4972        | 11.1111 | 5000 | 0.4839          |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1