|
--- |
|
license: apache-2.0 |
|
tags: |
|
- audio-classification |
|
- generated_from_trainer |
|
metrics: |
|
- accuracy |
|
- precision |
|
- f1 |
|
model-index: |
|
- name: wav2vec2-large |
|
results: [] |
|
--- |
|
|
|
|
|
|
# wav2vec2-large |
|
|
|
This model is a fine-tuned version of [facebook/wav2vec2-large](https://huggingface.co/facebook/wav2vec2-large) on the galsenai/waxal_dataset dataset. |
|
It achieves the following results on the evaluation set: |
|
- Loss: 0.3413 |
|
- Accuracy: 0.9443 |
|
- Precision: 0.9780 |
|
- F1: 0.9604 |
|
|
|
## Model description |
|
|
|
This model is [facebook/wav2vec2-large](https://huggingface.co/facebook/wav2vec2-large), a self-supervised speech encoder pre-trained on unlabeled audio, fine-tuned with a classification head for audio classification on the galsenai/waxal_dataset dataset.
|
|
|
## Intended uses & limitations |
|
|
|
The model is intended for audio classification on inputs similar to those in the galsenai/waxal_dataset dataset. Its behavior on other languages, domains, or recording conditions has not been evaluated here.
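
A minimal inference sketch, assuming the fine-tuned checkpoint has been saved locally or pushed to the Hub (the repo id below is a placeholder, not the actual path):

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual checkpoint path for this model.
classifier = pipeline(
    "audio-classification",
    model="your-username/wav2vec2-large-waxal",
)

# The pipeline handles feature extraction and resampling to the model's
# expected sampling rate (16 kHz for wav2vec2).
predictions = classifier("example.wav", top_k=3)
for pred in predictions:
    print(f"{pred['label']}: {pred['score']:.4f}")
```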
|
|
|
## Training and evaluation data |
|
|
|
The model was fine-tuned and evaluated on the galsenai/waxal_dataset dataset; per-epoch validation metrics are reported in the training results table below.
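
A sketch for loading the data with 🤗 Datasets (the available splits and column names are assumptions; inspect the dataset repo to confirm):

```python
from datasets import load_dataset

# Split and feature names are assumptions; print the object to verify.
dataset = load_dataset("galsenai/waxal_dataset")
print(dataset)              # shows available splits and columns
print(dataset["train"][0])  # inspect one example (assumes a "train" split)
```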
|
|
|
## Training procedure |
|
|
|
### Training hyperparameters |
|
|
|
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
|
- learning_rate: 3e-05 |
|
- train_batch_size: 12 |
|
- eval_batch_size: 12 |
|
- seed: 0 |
|
- gradient_accumulation_steps: 4 |
|
- total_train_batch_size: 48 |
|
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 |
|
- lr_scheduler_type: linear |
|
- lr_scheduler_warmup_ratio: 0.1 |
|
- num_epochs: 32.0 |
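
The hyperparameters above map onto `transformers.TrainingArguments` roughly as in this sketch; the output directory and any logging or evaluation cadence are assumptions, as they are not recorded in the card:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is an assumption.
training_args = TrainingArguments(
    output_dir="wav2vec2-large-waxal",  # assumption: not recorded in the card
    learning_rate=3e-5,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=12,
    seed=0,
    gradient_accumulation_steps=4,  # effective train batch size: 12 * 4 = 48
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=32.0,
    # Adam settings matching the values reported above.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```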
|
|
|
### Training results |
|
|
|
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | F1 | |
|
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:| |
|
| 4.6314 | 1.01 | 500 | 4.9165 | 0.0205 | 0.0028 | 0.0049 | |
|
| 3.7739 | 2.02 | 1000 | 4.4491 | 0.0356 | 0.0750 | 0.0252 | |
|
| 2.5035 | 3.04 | 1500 | 4.1429 | 0.1129 | 0.2672 | 0.1114 | |
|
| 1.5633 | 4.05 | 2000 | 3.1973 | 0.3676 | 0.6598 | 0.3830 | |
|
| 1.0538 | 5.06 | 2500 | 2.5479 | 0.5889 | 0.8417 | 0.6557 | |
|
| 0.7422 | 6.07 | 3000 | 1.4494 | 0.7825 | 0.8921 | 0.8194 | |
|
| 0.5762 | 7.08 | 3500 | 1.3168 | 0.7726 | 0.9277 | 0.8267 | |
|
| 0.46 | 8.1 | 4000 | 0.8783 | 0.8564 | 0.9532 | 0.8982 | |
|
| 0.4007 | 9.11 | 4500 | 0.7524 | 0.8738 | 0.9637 | 0.9137 | |
|
| 0.3374 | 10.12 | 5000 | 0.6386 | 0.8852 | 0.9678 | 0.9221 | |
|
| 0.3108 | 11.13 | 5500 | 0.5049 | 0.9106 | 0.9681 | 0.9373 | |
|
| 0.2735 | 12.15 | 6000 | 0.6097 | 0.8905 | 0.9624 | 0.9226 | |
|
| 0.2716 | 13.16 | 6500 | 0.4543 | 0.9000 | 0.9569 | 0.9206 | |
|
| 0.2484 | 14.17 | 7000 | 0.3965 | 0.9272 | 0.9742 | 0.9489 | |
|
| 0.228 | 15.18 | 7500 | 0.6807 | 0.8856 | 0.9777 | 0.9257 | |
|
| 0.2307 | 16.19 | 8000 | 0.5219 | 0.9174 | 0.9802 | 0.9464 | |
|
| 0.2169 | 17.21 | 8500 | 0.4630 | 0.9121 | 0.9677 | 0.9338 | |
|
| 0.1997 | 18.22 | 9000 | 0.5152 | 0.9128 | 0.9740 | 0.9398 | |
|
| 0.1921 | 19.23 | 9500 | 0.5105 | 0.9144 | 0.9867 | 0.9476 | |
|
| 0.1825 | 20.24 | 10000 | 0.6302 | 0.9053 | 0.9832 | 0.9407 | |
|
| 0.1786 | 21.25 | 10500 | 0.4602 | 0.9272 | 0.9813 | 0.9524 | |
|
| 0.1671 | 22.27 | 11000 | 0.5443 | 0.9147 | 0.9794 | 0.9444 | |
|
| 0.1623 | 23.28 | 11500 | 0.3413 | 0.9443 | 0.9780 | 0.9604 | |
|
| 0.1595 | 24.29 | 12000 | 0.4478 | 0.9288 | 0.9813 | 0.9531 | |
|
| 0.151 | 25.3 | 12500 | 0.4178 | 0.9360 | 0.9818 | 0.9571 | |
|
| 0.1472 | 26.32 | 13000 | 0.4154 | 0.9356 | 0.9833 | 0.9578 | |
|
| 0.1473 | 27.33 | 13500 | 0.4549 | 0.9318 | 0.9837 | 0.9561 | |
|
| 0.131 | 28.34 | 14000 | 0.3574 | 0.9424 | 0.9845 | 0.9621 | |
|
| 0.134 | 29.35 | 14500 | 0.4475 | 0.9333 | 0.9840 | 0.9568 | |
|
| 0.1282 | 30.36 | 15000 | 0.4012 | 0.9382 | 0.9837 | 0.9591 | |
|
| 0.1307 | 31.38 | 15500 | 0.3552 | 0.9428 | 0.9847 | 0.9624 | |
|
|
|
|
|
### Framework versions |
|
|
|
- Transformers 4.27.0.dev0 |
|
- Pytorch 1.11.0+cu113 |
|
- Datasets 2.9.1.dev0 |
|
- Tokenizers 0.13.2 |
|
|