---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-tiny-fr
  results: []
---
|
|
|
|
|
|
# whisper-tiny-fr |
|
|
|
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny); the dataset used for fine-tuning was not recorded when this card was generated, though the model name suggests French speech data.
It achieves the following results on the evaluation set (final checkpoint, step 6250):
- Loss: 0.8198
- Wer: 0.8502

Note that validation WER reached its minimum of 0.4373 at epoch 7 and drifted upward afterwards while training loss kept shrinking, a pattern consistent with overfitting; see the training results table below.
|
|
|
## Model description |
|
|
|
whisper-tiny-fr is a fine-tune of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny), the smallest checkpoint of OpenAI's Whisper family of speech-recognition models (about 39M parameters). Judging by the name it targets French transcription, but the card does not record further details.
|
|
|
## Intended uses & limitations |
|
|
|
The model is intended for automatic speech recognition, presumably of French audio given its name. With a final evaluation WER of 0.8502 (and a best mid-training validation WER of roughly 0.44), this checkpoint should be treated as experimental rather than production-ready. A minimal inference sketch follows.
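
The snippet below is a minimal inference sketch, not an official usage recipe: the repo id is a hypothetical placeholder for wherever the checkpoint is actually published, and `sample.wav` stands in for your own audio file.

```python
from transformers import pipeline

# Hypothetical repo id; substitute the actual location of the checkpoint.
transcriber = pipeline(
    "automatic-speech-recognition",
    model="your-username/whisper-tiny-fr",
)

# The pipeline's feature extractor resamples the audio to the 16 kHz
# sampling rate Whisper expects before transcribing it.
result = transcriber("sample.wav")
print(result["text"])
```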
|
|
|
## Training and evaluation data |
|
|
|
More information needed; the datasets used for training and evaluation were not recorded when this card was generated.
|
|
|
## Training procedure |
|
|
|
### Training hyperparameters |
|
|
|
The following hyperparameters were used during training (a sketch of how they map onto `Seq2SeqTrainingArguments` follows the list):
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 6250
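
This is a sketch under stated assumptions rather than the exact training script: the evaluation cadence is inferred from the per-epoch results table below, and model loading, data preparation, and the WER computation callback are omitted because the card does not record them. The Adam betas and epsilon listed above are the optimizer defaults, so they need not be passed explicitly.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-tiny-fr",
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=6250,
    evaluation_strategy="epoch",  # assumption: the results table reports one eval per epoch
    predict_with_generate=True,   # decode to text during eval so WER can be scored
)
```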
|
|
|
### Training results |
|
|
|
| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6223        | 1.0   | 250  | 0.7567          | 0.7225 |
| 0.475         | 2.0   | 500  | 0.6213          | 0.5461 |
| 0.2938        | 3.0   | 750  | 0.5860          | 0.5383 |
| 0.1613        | 4.0   | 1000 | 0.5903          | 0.4384 |
| 0.1026        | 5.0   | 1250 | 0.5992          | 0.4451 |
| 0.0615        | 6.0   | 1500 | 0.6322          | 0.5383 |
| 0.0422        | 7.0   | 1750 | 0.6398          | 0.4373 |
| 0.019         | 8.0   | 2000 | 0.6682          | 0.5239 |
| 0.0125        | 9.0   | 2250 | 0.6980          | 0.6681 |
| 0.0069        | 10.0  | 2500 | 0.7335          | 0.8679 |
| 0.0039        | 11.0  | 2750 | 0.7354          | 0.6238 |
| 0.0026        | 12.0  | 3000 | 0.7458          | 0.6315 |
| 0.0021        | 13.0  | 3250 | 0.7599          | 0.6715 |
| 0.0018        | 14.0  | 3500 | 0.7682          | 0.7103 |
| 0.0015        | 15.0  | 3750 | 0.7750          | 0.7081 |
| 0.0013        | 16.0  | 4000 | 0.7846          | 0.7125 |
| 0.0012        | 17.0  | 4250 | 0.7897          | 0.7114 |
| 0.001         | 18.0  | 4500 | 0.7962          | 0.9345 |
| 0.0009        | 19.0  | 4750 | 0.8001          | 0.7170 |
| 0.0009        | 20.0  | 5000 | 0.8074          | 0.8335 |
| 0.0008        | 21.0  | 5250 | 0.8107          | 0.8424 |
| 0.0007        | 22.0  | 5500 | 0.8152          | 0.8402 |
| 0.0007        | 23.0  | 5750 | 0.8181          | 0.8446 |
| 0.0007        | 24.0  | 6000 | 0.8187          | 0.8479 |
| 0.0007        | 25.0  | 6250 | 0.8198          | 0.8502 |
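
The Wer column is the word error rate on the validation set, expressed as a fraction. As a quick illustration with made-up strings, this is how the metric is computed with the `evaluate` library (the `wer` metric declared in this card's metadata):

```python
import evaluate

wer_metric = evaluate.load("wer")

# WER = (substitutions + insertions + deletions) / reference word count.
score = wer_metric.compute(
    predictions=["bonjour tout le monde"],   # hypothesis: 4 words
    references=["bonjour à tout le monde"],  # reference: 5 words
)
print(score)  # 0.2 -> one deletion out of five reference words
```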
|
|
|
|
|
### Framework versions |
|
|
|
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
|