
ws_w2lm_base_distill_noisy_teacher_libri_epochs_50_batch_8

This model is a fine-tuned version of rohitp1/kkkh_w2lm_base_plus_finetune_teacher_noise_libri360_50_epochs_batch_16 (the training dataset is not specified in this card). It achieves the following results on the evaluation set; a minimal inference sketch follows the metrics:

  • Loss: 0.0945
  • WER: 0.1041
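
A quick way to try the checkpoint is the Transformers ASR pipeline, as in the sketch below. The full repository id (including the rohitp1 namespace, inferred from the teacher model's namespace) and the audio file name are assumptions, not confirmed by the card.

```python
# Minimal inference sketch. Assumptions: the repo id below is correct and the
# checkpoint is a CTC-style speech model loadable by the ASR pipeline.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="rohitp1/ws_w2lm_base_distill_noisy_teacher_libri_epochs_50_batch_8",
)

# 16 kHz mono audio works best; decoding audio files requires ffmpeg.
result = asr("sample.wav")  # placeholder file name
print(result["text"])
```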

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 1
  • seed: 42
  • gradient_accumulation_steps: 256
  • total_train_batch_size: 2048
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.2
  • num_epochs: 50
  • mixed_precision_training: Native AMP
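
The listed values map onto Transformers `TrainingArguments` roughly as in the sketch below. The output directory name is an assumption, and the rest of the training setup (model, dataset, data collator, distillation loss) is omitted because it is not described in the card.

```python
# Hedged sketch of the training configuration implied by the list above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="ws_w2lm_base_distill_noisy_teacher_libri_epochs_50_batch_8",  # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=256,  # effective batch size: 8 * 256 = 2048
    lr_scheduler_type="linear",
    warmup_ratio=0.2,
    num_train_epochs=50,
    fp16=True,  # Native AMP mixed-precision training
)
```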

Training results

| Training Loss | Epoch | Step | Validation Loss | WER    |
|---------------|-------|------|-----------------|--------|
| 0.0562        | 2.46  | 250  | 0.0741          | 0.1135 |
| 0.0538        | 4.92  | 500  | 0.0736          | 0.1126 |
| 0.0506        | 7.38  | 750  | 0.0751          | 0.1116 |
| 0.0465        | 9.84  | 1000 | 0.0752          | 0.1099 |
| 0.0424        | 12.31 | 1250 | 0.0762          | 0.1089 |
| 0.0385        | 14.77 | 1500 | 0.0790          | 0.1078 |
| 0.0355        | 17.23 | 1750 | 0.0788          | 0.1062 |
| 0.0335        | 19.69 | 2000 | 0.0795          | 0.1053 |
| 0.0314        | 22.15 | 2250 | 0.0825          | 0.1052 |
| 0.0298        | 24.61 | 2500 | 0.0837          | 0.1055 |
| 0.0285        | 27.07 | 2750 | 0.0873          | 0.1049 |
| 0.0274        | 29.53 | 3000 | 0.0868          | 0.1043 |
| 0.0266        | 32.0  | 3250 | 0.0891          | 0.1044 |
| 0.0256        | 34.46 | 3500 | 0.0902          | 0.1044 |
| 0.0251        | 36.92 | 3750 | 0.0911          | 0.1044 |
| 0.0247        | 39.38 | 4000 | 0.0926          | 0.1042 |
| 0.0242        | 41.84 | 4250 | 0.0936          | 0.1042 |
| 0.0238        | 44.3  | 4500 | 0.0940          | 0.1042 |
| 0.0235        | 46.76 | 4750 | 0.0938          | 0.1042 |
| 0.0233        | 49.22 | 5000 | 0.0945          | 0.1041 |
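
The WER column is the word error rate on the validation set. A minimal sketch of computing the same metric with the `evaluate` library is shown below; the prediction and reference strings are placeholders, not outputs of this model.

```python
# Hedged sketch of how the reported WER metric can be computed.
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["the cat sat on the mat"]   # placeholder model transcriptions
references = ["the cat sat on a mat"]      # placeholder ground-truth transcripts
print(wer_metric.compute(predictions=predictions, references=references))
```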

Framework versions

  • Transformers 4.29.2
  • PyTorch 1.13.1
  • Datasets 2.7.1
  • Tokenizers 0.11.0