
wav2vec2-large-xls-r-300m-dsb-base

This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the dataset "maminorěcna dolnoserbšćina" (native Lower Sorbian corpus). The rights to this dataset are reserved by the Institute for the Study of the Language, History and Culture of the Lusatian Sorbs/Wends and Comparative Minority Research. In case of any copyright issues, feel free to contact me so that I can take this model offline.
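The snippet below is a minimal transcription sketch, assuming a mono recording resampled to the 16 kHz rate that XLS-R expects and that the repository ships the usual Wav2Vec2 processor files; the input file name is an illustrative placeholder.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "TiMauzi/wav2vec2-large-xls-r-300m-dsb-base"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# "sample.wav" is a hypothetical input file; resample it to 16 kHz for XLS-R.
waveform, sample_rate = torchaudio.load("sample.wav")
waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.squeeze(0), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding of the most likely token at each frame
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```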

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0003
  • train_batch_size: 12
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 3
  • total_train_batch_size: 36
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 1
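As a rough guide to reproducing this setup, the sketch below maps the hyperparameters above onto Transformers' `TrainingArguments`; the output directory is a placeholder, and the listed Adam settings are already the library defaults.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-dsb-base",  # placeholder path
    learning_rate=3e-4,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=3,   # 12 * 3 = 36 effective train batch size
    num_train_epochs=1,
    lr_scheduler_type="linear",
    warmup_steps=100,
    seed=42,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Transformers default optimizer.
)
```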

Training results

Step   Training Loss   Validation Loss   WER
 200        5.513600          3.467023   1.000000
 400        3.261500          3.338312   1.000000
 600        3.165600          3.269723   1.000000
 800        3.061200          3.040675   0.970368
1000        2.900200          2.777975   0.999815
1200        2.628300          2.489217   0.977313
1400        2.238100          2.156038   0.955459
1600        1.997200          2.101610   0.940087
1800        1.755900          1.965557   0.902398
2000        1.558900          2.060277   0.949810
2200        1.384400          2.010253   0.899713
2400        1.201500          2.141238   0.926382
2600        1.035700          2.229016   0.906195
2800        0.853200          2.401224   0.881008
3000        0.714000          2.431344   0.890175
3200        0.608200          2.569438   0.893509
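The WER column above is the standard word error rate. A minimal sketch of how such values can be computed with the `evaluate` library follows; the transcripts shown are hypothetical placeholders, not samples from the corpus.

```python
import evaluate

wer_metric = evaluate.load("wer")  # uses the `jiwer` backend

wer = wer_metric.compute(
    predictions=["predicted transcript one", "predicted transcript two"],
    references=["reference transcript one", "reference transcript two"],
)
print(f"WER: {wer:.6f}")
```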

Framework versions

  • Transformers 4.32.1
  • Pytorch 2.0.1+cu118
  • Datasets 2.14.4
  • Tokenizers 0.13.3