---
library_name: transformers
license: mit
base_model: MBZUAI/speecht5_tts_clartts_ar
tags:
  - generated_from_trainer
model-index:
  - name: Arabictts
    results: []
---

# Arabictts

This model is a fine-tuned version of [MBZUAI/speecht5_tts_clartts_ar](https://huggingface.co/MBZUAI/speecht5_tts_clartts_ar) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.5481
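
Since the base model is SpeechT5, inference should follow the standard `transformers` SpeechT5 text-to-speech pipeline. Below is a minimal sketch, assuming the repo id `CarmelaFinianos/ArabicTTS` and the stock `microsoft/speecht5_hifigan` vocoder (neither is confirmed by this card); SpeechT5 also conditions on a speaker embedding, which this card does not ship, so a placeholder is used:

```python
import torch
import soundfile as sf
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

# Repo id assumed from this card's title; adjust if the checkpoint lives elsewhere.
model_id = "CarmelaFinianos/ArabicTTS"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Tokenize an Arabic input sentence ("peace be upon you").
inputs = processor(text="السلام عليكم", return_tensors="pt")

# SpeechT5 expects a 512-dim speaker x-vector; a zero vector is a placeholder.
# Substitute a real embedding (e.g., from speechbrain/spkrec-xvect-voxceleb)
# for natural-sounding output.
speaker_embeddings = torch.zeros((1, 512))

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```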

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a code sketch reconstructing them follows the list):

- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 900
- mixed_precision_training: Native AMP
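
For reference, these settings map onto `transformers` training arguments roughly as follows. This is a hedged reconstruction, not the author's actual script; `output_dir` is assumed, and the 50-step evaluation cadence is inferred from the results table below:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="Arabictts",         # assumed output directory
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=8,  # effective train batch size: 4 * 8 = 32
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=900,
    fp16=True,                      # "Native AMP" mixed precision
    eval_strategy="steps",
    eval_steps=50,                  # inferred from the 50-step rows below
)
```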

### Training results

| Training Loss | Epoch   | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.6582        | 3.7736  | 50   | 0.5892          |
| 0.603         | 7.5472  | 100  | 0.5603          |
| 0.5828        | 11.3208 | 150  | 0.5545          |
| 0.566         | 15.0943 | 200  | 0.5418          |
| 0.5504        | 18.8679 | 250  | 0.5393          |
| 0.5379        | 22.6415 | 300  | 0.5357          |
| 0.534         | 26.4151 | 350  | 0.5347          |
| 0.5226        | 30.1887 | 400  | 0.5352          |
| 0.5159        | 33.9623 | 450  | 0.5335          |
| 0.5058        | 37.7358 | 500  | 0.5350          |
| 0.5048        | 41.5094 | 550  | 0.5356          |
| 0.4994        | 45.2830 | 600  | 0.5367          |
| 0.4939        | 49.0566 | 650  | 0.5370          |
| 0.4923        | 52.8302 | 700  | 0.5366          |
| 0.488         | 56.6038 | 750  | 0.5397          |
| 0.4841        | 60.3774 | 800  | 0.5401          |
| 0.4834        | 64.1509 | 850  | 0.5490          |
| 0.4794        | 67.9245 | 900  | 0.5481          |

### Framework versions

- Transformers 4.46.2
- PyTorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3