
SpeechT5 STT Wav2Vec2

This model is a fine-tuned version of facebook/wav2vec2-base-960h on the LJ Speech dataset. It achieves the following results on the evaluation set:

  • Loss: 644.5502
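
The checkpoint can be tried with the standard Transformers ASR pipeline. A minimal sketch, assuming the repository keeps the usual Wav2Vec2 CTC head and processor inherited from facebook/wav2vec2-base-960h ("sample.wav" is a placeholder path):

```python
from transformers import pipeline

# Assumes this repository exposes the standard Wav2Vec2 CTC interface.
asr = pipeline(
    "automatic-speech-recognition",
    model="Asim037/wav222vec222v2-stt",
)

# Wav2Vec2 models expect 16 kHz mono audio; the pipeline resamples
# file inputs automatically.
result = asr("sample.wav")  # placeholder audio file
print(result["text"])
```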

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
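
The card names LJ Speech as the fine-tuning dataset. For reference, a minimal loading sketch; the `lj_speech` Hub id is an assumption, since the card does not specify which copy or split was used:

```python
from datasets import load_dataset, Audio

# "lj_speech" is an assumed dataset id; LJ Speech ships a single
# "train" split. Resample to the 16 kHz rate wav2vec2 expects.
ds = load_dataset("lj_speech", split="train")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
print(ds[0]["text"])
```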

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 5
  • mixed_precision_training: Native AMP
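
These settings map onto Transformers `TrainingArguments` roughly as follows. This is a sketch: the `output_dir` is illustrative, and the Adam betas and epsilon listed above are the optimizer's defaults, so they are not set explicitly.

```python
from transformers import TrainingArguments

# Illustrative reconstruction of the hyperparameters listed above.
# output_dir is an assumption, not taken from the card.
training_args = TrainingArguments(
    output_dir="wav2vec2-stt-ljspeech",
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # effective train batch size 8 * 4 = 32
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=5,
    fp16=True,                      # Native AMP mixed precision
)
```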

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|---------------|--------|------|-----------------|
| 653.9298      | 0.3610 | 50   | 646.8141        |
| 840.6591      | 0.7220 | 100  | 646.6405        |
| 1030.5028     | 1.0830 | 150  | 644.3955        |
| 816.1288      | 1.4440 | 200  | 653.6038        |
| 651.3673      | 1.8051 | 250  | 647.9319        |
| 786.9055      | 2.1661 | 300  | 643.6482        |
| 655.5121      | 2.5271 | 350  | 647.9398        |
| 664.6528      | 2.8881 | 400  | 646.9968        |
| 653.3564      | 3.2491 | 450  | 653.9541        |
| 664.1251      | 3.6101 | 500  | 643.4816        |
| 674.7263      | 3.9711 | 550  | 644.8188        |
| 659.9671      | 4.3321 | 600  | 650.9330        |
| 861.3966      | 4.6931 | 650  | 644.5502        |

Framework versions

  • Transformers 4.41.2
  • Pytorch 2.3.0+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1
