
EGY1.5K

This model is a fine-tuned version of microsoft/speecht5_tts on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4590

Model description

A fine-tuned SpeechT5 text-to-speech checkpoint of roughly 144M parameters, stored as F32 safetensors. No further details about the fine-tuning target (speaker, language, or domain) have been provided.

Intended uses & limitations

More information needed
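
The card itself gives no usage instructions, but since the base model is microsoft/speecht5_tts, the checkpoint should load with the standard SpeechT5 classes from transformers. The sketch below is an assumption based on the usual SpeechT5 text-to-speech recipe, not guidance from the model author: the sample text, the CMU Arctic xvector used as a speaker embedding, and the output filename are all illustrative stand-ins.

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

# Load the fine-tuned checkpoint plus the stock SpeechT5 vocoder.
processor = SpeechT5Processor.from_pretrained("rahafvii/EGY1.5K")
model = SpeechT5ForTextToSpeech.from_pretrained("rahafvii/EGY1.5K")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# SpeechT5 conditions generation on a speaker embedding. The training
# speaker is undocumented, so a CMU Arctic xvector is used as a stand-in.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

# Replace with text in whatever language the model was fine-tuned on.
inputs = processor(text="Hello from SpeechT5.", return_tensors="pt")

# Generate a 16 kHz waveform and write it to disk.
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("output.wav", speech.numpy(), samplerate=16000)
```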

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a Seq2SeqTrainingArguments sketch follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 4
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • training_steps: 3000
  • mixed_precision_training: Native AMP
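
For reference, these values map one-to-one onto transformers' Seq2SeqTrainingArguments. The snippet below is a plausible reconstruction, not the author's actual training script; the output_dir name is a placeholder.

```python
from transformers import Seq2SeqTrainingArguments

# Hedged reconstruction of the listed hyperparameters; output_dir is hypothetical.
training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_tts_egy1.5k",  # placeholder name
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=4,      # effective train batch size: 4 * 4 = 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=3000,
    fp16=True,                          # "Native AMP" mixed precision
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-8 is the Trainer's default
# optimizer, so it needs no extra arguments here.
```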

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5688        | 1.48  | 100  | 0.5112          |
| 0.5301        | 2.96  | 200  | 0.4908          |
| 0.5023        | 4.44  | 300  | 0.4707          |
| 0.524         | 5.93  | 400  | 0.4797          |
| 0.5001        | 7.41  | 500  | 0.4712          |
| 0.4774        | 8.89  | 600  | 0.4690          |
| 0.4793        | 10.37 | 700  | 0.4627          |
| 0.4666        | 11.85 | 800  | 0.4657          |
| 0.4649        | 13.33 | 900  | 0.4599          |
| 0.4659        | 14.81 | 1000 | 0.4616          |
| 0.4557        | 16.3  | 1100 | 0.4532          |
| 0.4516        | 17.78 | 1200 | 0.4535          |
| 0.4489        | 19.26 | 1300 | 0.4572          |
| 0.4431        | 20.74 | 1400 | 0.4504          |
| 0.4488        | 22.22 | 1500 | 0.4543          |
| 0.4452        | 23.7  | 1600 | 0.4557          |
| 0.4386        | 25.19 | 1700 | 0.4549          |
| 0.4297        | 26.67 | 1800 | 0.4487          |
| 0.4327        | 28.15 | 1900 | 0.4559          |
| 0.425         | 29.63 | 2000 | 0.4572          |
| 0.4251        | 31.11 | 2100 | 0.4531          |
| 0.4295        | 32.59 | 2200 | 0.4500          |
| 0.4258        | 34.07 | 2300 | 0.4561          |
| 0.4222        | 35.56 | 2400 | 0.4550          |
| 0.4119        | 37.04 | 2500 | 0.4569          |
| 0.4208        | 38.52 | 2600 | 0.4573          |
| 0.4145        | 40.0  | 2700 | 0.4568          |
| 0.4215        | 41.48 | 2800 | 0.4585          |
| 0.4141        | 42.96 | 2900 | 0.4594          |
| 0.4136        | 44.44 | 3000 | 0.4590          |

Framework versions

  • Transformers 4.39.3
  • PyTorch 2.3.0+cu118
  • Datasets 3.0.0
  • Tokenizers 0.15.2