---
library_name: transformers
language:
- ne
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
datasets:
- kiranpantha/OpenSLR54-Balanced-Nepali
metrics:
- wer
model-index:
- name: XLSR-300M-Nepali
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: OpenSLR54
      type: kiranpantha/OpenSLR54-Balanced-Nepali
      config: default
      split: test
      args: 'config: ne, split: train,test'
    metrics:
    - name: Wer
      type: wer
      value: 0.5244204160175937
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# XLSR-300M-Nepali

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the OpenSLR54 dataset ([kiranpantha/OpenSLR54-Balanced-Nepali](https://huggingface.co/datasets/kiranpantha/OpenSLR54-Balanced-Nepali)).
It achieves the following results on the evaluation set:
- Loss: 0.2681
- Wer: 0.5244
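
A minimal inference sketch is shown below as an illustration; it is not taken from the original card. The repository id `kiranpantha/XLSR-300M-Nepali` and the audio file name are assumptions, and the model expects 16 kHz mono audio as with other wav2vec2-xls-r checkpoints.

```python
# Minimal inference sketch. The repo id below is an assumption based on the
# model name in this card; replace it with the actual checkpoint path if different.
import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="kiranpantha/XLSR-300M-Nepali",  # assumed repo id
    device=0 if torch.cuda.is_available() else -1,
)

# wav2vec2-xls-r models expect 16 kHz mono audio; the file name is a placeholder.
result = asr("nepali_sample_16khz.wav")
print(result["text"])
```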

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
- mixed_precision_training: Native AMP
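
These settings roughly correspond to the following `TrainingArguments`; this is a sketch that assumes `Trainer` defaults for anything not listed above, with a placeholder `output_dir`:

```python
# Sketch of TrainingArguments mirroring the hyperparameters above; output_dir
# and any argument not listed in the card are assumptions or Trainer defaults.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="XLSR-300M-Nepali",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=2,
    fp16=True,  # "Native AMP" mixed precision
)
```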

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer    |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 3.2642        | 0.0722 | 300  | 2.9627          | 1.0    |
| 2.1949        | 0.1444 | 600  | 1.5526          | 1.0160 |
| 1.4595        | 0.2166 | 900  | 1.1674          | 0.9810 |
| 1.2128        | 0.2888 | 1200 | 0.9901          | 0.9668 |
| 0.976         | 0.3610 | 1500 | 0.6942          | 0.7696 |
| 0.8267        | 0.4332 | 1800 | 0.6314          | 0.7552 |
| 0.7542        | 0.5054 | 2100 | 0.5522          | 0.7156 |
| 0.7228        | 0.5776 | 2400 | 0.5210          | 0.6960 |
| 0.6707        | 0.6498 | 2700 | 0.4744          | 0.6581 |
| 0.6368        | 0.7220 | 3000 | 0.4529          | 0.6535 |
| 0.5944        | 0.7942 | 3300 | 0.4229          | 0.6264 |
| 0.5651        | 0.8664 | 3600 | 0.4061          | 0.6161 |
| 0.5469        | 0.9386 | 3900 | 0.3788          | 0.6103 |
| 0.5308        | 1.0108 | 4200 | 0.3668          | 0.5957 |
| 0.4684        | 1.0830 | 4500 | 0.3509          | 0.5920 |
| 0.4382        | 1.1552 | 4800 | 0.3398          | 0.5920 |
| 0.4424        | 1.2274 | 5100 | 0.3260          | 0.5767 |
| 0.4159        | 1.2996 | 5400 | 0.3189          | 0.5690 |
| 0.419         | 1.3718 | 5700 | 0.3067          | 0.5581 |
| 0.4114        | 1.4440 | 6000 | 0.3019          | 0.5568 |
| 0.3903        | 1.5162 | 6300 | 0.2982          | 0.5549 |
| 0.3915        | 1.5884 | 6600 | 0.2887          | 0.5493 |
| 0.3789        | 1.6606 | 6900 | 0.2813          | 0.5398 |
| 0.3725        | 1.7329 | 7200 | 0.2763          | 0.5339 |
| 0.3706        | 1.8051 | 7500 | 0.2704          | 0.5285 |
| 0.3624        | 1.8773 | 7800 | 0.2706          | 0.5264 |
| 0.357         | 1.9495 | 8100 | 0.2681          | 0.5244 |
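
The WER metric reported above is typically computed with the `evaluate` library for CTC models; the sketch below is an assumption about how such a `compute_metrics` function might look, not the original training script (the `processor` argument and padding handling are illustrative).

```python
# Hedged sketch of WER computation for a CTC model with the `evaluate` library;
# not the original training script.
import numpy as np
import evaluate

wer_metric = evaluate.load("wer")

def compute_metrics(pred, processor):
    # greedy CTC decoding of the logits
    pred_ids = np.argmax(pred.predictions, axis=-1)
    # restore padded label positions (-100) to the pad token before decoding
    label_ids = np.where(pred.label_ids == -100, processor.tokenizer.pad_token_id, pred.label_ids)
    pred_str = processor.batch_decode(pred_ids)
    label_str = processor.batch_decode(label_ids, group_tokens=False)
    return {"wer": wer_metric.compute(predictions=pred_str, references=label_str)}
```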


### Framework versions

- Transformers 4.45.0.dev0
- Pytorch 2.4.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1