Update README.md

README.md
# wav2vec2-xlsr-1b-finnish

**Note**: there is a better V2 version of this model, which has been fine-tuned for longer with 16 hours of additional data: [aapot/wav2vec2-xlsr-1b-finnish-v2](https://huggingface.co/aapot/wav2vec2-xlsr-1b-finnish-v2)

**Note**: there is also a language model version of this acoustic-only model, which achieves better test results thanks to the added language model: [aapot/wav2vec2-xlsr-1b-finnish-lm](https://huggingface.co/aapot/wav2vec2-xlsr-1b-finnish-lm)

This acoustic model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) for Finnish ASR. The model has been fine-tuned with 259.57 hours of Finnish transcribed speech data.

It achieves the following results on the Common Voice 7 test set without a language model:

- Wer: 13.11
- Cer: 2.23
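
As a rough usage sketch (not an official example from this card), the model can be loaded with Hugging Face Transformers and used with greedy CTC decoding, which corresponds to the "without a language model" results above. The audio file name and the use of librosa for 16 kHz resampling are assumptions for illustration:

```python
# Minimal inference sketch; assumes torch, transformers and librosa are installed
# and that "audio.wav" is a local speech file (hypothetical path).
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "aapot/wav2vec2-xlsr-1b-finnish"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# wav2vec 2.0 checkpoints expect 16 kHz mono audio.
speech, _ = librosa.load("audio.wav", sr=16_000, mono=True)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

# Greedy (argmax) CTC decoding without a language model.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```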

## Model description

TODO

## Intended uses & limitations

TODO

## Training and evaluation data

This model was fine-tuned with 259.57 hours of Finnish transcribed speech data from the following datasets:

| Dataset | Hours | % of total hours |
|:------------------------------------------------------------------------------------------------------------------------------|:--------:|:----------------:|
| [Common Voice 7.0 Finnish train+evaluation+other splits](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | 9.70 h | 3.74 % |
| [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) | 0.24 h | 0.09 % |
| [VoxPopuli Finnish](https://github.com/facebookresearch/voxpopuli) | 5.94 h | 2.29 % |
| [CSS10 Finnish](https://github.com/kyubyong/css10) | 10.32 h | 3.98 % |
| [Aalto Finnish Parliament ASR Corpus](http://urn.fi/urn:nbn:fi:lb-2021051903) | 228.00 h | 87.84 % |
| [Finnish Broadcast Corpus](http://urn.fi/urn:nbn:fi:lb-2016042502) | 5.37 h | 2.07 % |
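
As an illustration only (not the author's preprocessing script), the Common Voice portion of the data above can be loaded with the Hugging Face `datasets` library; the split names below and the need to have accepted the dataset's terms on the Hub are assumptions, and the other corpora have to be obtained from their own sources:

```python
# Sketch of loading the Common Voice 7.0 Finnish splits listed above.
# Assumption: the dataset terms have been accepted on the Hugging Face Hub
# and an access token is configured locally.
from datasets import concatenate_datasets, load_dataset

# "evaluation" in the table is assumed to mean the validation split.
splits = ["train", "validation", "other"]
parts = [
    load_dataset("mozilla-foundation/common_voice_7_0", "fi", split=s)
    for s in splits
]
cv_finnish = concatenate_datasets(parts)
print(cv_finnish)
```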

## Training procedure

The following hyperparameters were used during training:
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: [8-bit Adam](https://github.com/facebookresearch/bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
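
For context, a minimal sketch of how the optimizer and scheduler listed above could be wired up is shown below; the model and step count are placeholders, and the learning rate is omitted because it is not shown in this excerpt:

```python
# Sketch of the 8-bit Adam + linear warmup configuration listed above.
# Assumptions: bitsandbytes and transformers are installed; `model` and
# `num_training_steps` are placeholders, not values from the original training run.
import torch
import bitsandbytes as bnb
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(16, 16)  # stand-in for the fine-tuned wav2vec2 model
num_training_steps = 10_000      # depends on dataset size, batch size and the 5 epochs

optimizer = bnb.optim.Adam8bit(model.parameters(), betas=(0.9, 0.999), eps=1e-08)
lr_scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=500, num_training_steps=num_training_steps
)
```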