Commit fe42c62 (parent: affef11) by iamTangsang: Update README.md

README.md CHANGED
@@ -52,6 +52,8 @@ This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://hug
 ## Model description
 
 The model is a fine-tuned version of Wav2Vec2 XLS-R (300 million parameters) for Nepali Automatic Speech Recognition. The reported results are on the OpenSLR test split.
+- WER on OpenSLR: 16.82%
+- CER on OpenSLR: 2.72%
 
 
 ## Intended uses & limitations
@@ -103,6 +105,7 @@ The following hyperparameters were used during training:
 - mixed_precision_training: Native AMP
 
 ### Initial Training on OpenSLR-54 for 16 epochs
+
 The following hyperparameters were used:
 - learning_rate: 3e-04
 - train_batch_size: 16
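The WER and CER figures added in this commit are standard edit-distance metrics. As a point of reference, a minimal sketch of how they are computed (plain Levenshtein distance over words and characters; this is an illustration, not the evaluation script used for the reported numbers):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences via dynamic programming."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return d[-1][-1]


def wer(reference, hypothesis):
    """Word Error Rate: edit distance over word tokens, normalized by reference length."""
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)


def cer(reference, hypothesis):
    """Character Error Rate: edit distance over characters, normalized by reference length."""
    return edit_distance(list(reference), list(hypothesis)) / len(reference)
```

For example, one substituted word in a four-word reference yields a WER of 0.25. In practice, libraries such as `jiwer` or `evaluate` compute these metrics the same way.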
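The hyperparameters listed above (learning rate 3e-04, batch size 16, 16 epochs, Native AMP) map directly onto the Hugging Face `Trainer` configuration. A minimal sketch, assuming training via `transformers.TrainingArguments` (the `output_dir` and dataset wiring are hypothetical, not taken from this repository):

```python
from transformers import TrainingArguments

# Hypothetical sketch: only the hyperparameters stated in the README are real;
# output_dir is an assumed placeholder.
training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-300m-nepali",  # hypothetical path
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    num_train_epochs=16,   # initial training run on OpenSLR-54
    fp16=True,             # Native AMP mixed-precision training
)
```

Passing `fp16=True` is how `Trainer` enables the "Native AMP" mixed precision noted in the training section.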