# wav2vec2-base-finetuned-sentiment-mesd-v9
This model is a fine-tuned version of facebook/wav2vec2-base on an unspecified dataset (the model name suggests MESD, the Mexican Emotional Speech Database). It achieves the following results on the evaluation set:
- Loss: 0.3500
- Accuracy: 0.9154
## Model description
More information needed
## Intended uses & limitations
More information needed
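Pending more details from the author, a minimal usage sketch with the standard `transformers` audio-classification pipeline is shown below. The Hub repository id and the label set are assumptions (MESD is commonly annotated with six emotion classes); verify both against the checkpoint's actual `config.json` before relying on them.

```python
# Assumed label set; check `model.config.id2label` for the real mapping.
ASSUMED_LABELS = ["anger", "disgust", "fear", "happiness", "neutral", "sadness"]

def classify_emotion(audio_path: str):
    """Classify the emotion of a 16 kHz speech recording (sketch)."""
    from transformers import pipeline  # deferred: needs `pip install transformers`
    clf = pipeline(
        "audio-classification",
        # Placeholder: replace with the actual Hub id of this checkpoint.
        model="path/to/wav2vec2-base-finetuned-sentiment-mesd-v9",
    )
    # Returns a list of {"label": ..., "score": ...} dicts, best first.
    return clf(audio_path)
```

Note that wav2vec2 checkpoints expect 16 kHz mono audio; the pipeline resamples automatically when given a file path.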
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 40
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 100
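The derived values above follow from simple arithmetic, sketched below. The steps-per-epoch figure (3) is inferred from the training log, which advances 3 optimizer steps per epoch up to step 300; the schedule function mirrors the shape of a linear warmup-then-decay scheduler, not the exact library implementation.

```python
# Effective batch size = per-device batch size x gradient accumulation.
train_batch_size = 64
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps  # 256

# Warmup steps from the warmup ratio and total optimizer steps.
num_epochs = 100
steps_per_epoch = 3                              # inferred from the log: 3, 6, ..., 300
total_steps = num_epochs * steps_per_epoch       # 300
warmup_steps = int(total_steps * 0.01)           # lr_scheduler_warmup_ratio -> 3 steps

def linear_schedule_lr(step, base_lr=1e-4):
    """Linear warmup for `warmup_steps`, then linear decay to 0."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
```

So the learning rate peaks at 1e-4 after 3 steps and decays linearly to 0 at step 300.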
### Training results
Training Loss | Epoch | Step | Validation Loss | Accuracy |
---|---|---|---|---|
No log | 0.86 | 3 | 1.7825 | 0.1846 |
1.9553 | 1.86 | 6 | 1.7212 | 0.4308 |
1.9553 | 2.86 | 9 | 1.6164 | 0.3769 |
2.002 | 3.86 | 12 | 1.4904 | 0.3769 |
1.6191 | 4.86 | 15 | 1.4426 | 0.4385 |
1.6191 | 5.86 | 18 | 1.3516 | 0.5231 |
1.6209 | 6.86 | 21 | 1.2176 | 0.5538 |
1.6209 | 7.86 | 24 | 1.1683 | 0.5692 |
1.371 | 8.86 | 27 | 1.0885 | 0.5923 |
1.1568 | 9.86 | 30 | 1.0152 | 0.6385 |
1.1568 | 10.86 | 33 | 0.9289 | 0.6385 |
1.1023 | 11.86 | 36 | 0.9141 | 0.6308 |
1.1023 | 12.86 | 39 | 0.8526 | 0.6462 |
0.9448 | 13.86 | 42 | 0.8420 | 0.6769 |
0.7972 | 14.86 | 45 | 0.7976 | 0.6692 |
0.7972 | 15.86 | 48 | 0.8192 | 0.7308 |
0.7793 | 16.86 | 51 | 0.7108 | 0.7615 |
0.7793 | 17.86 | 54 | 0.6712 | 0.7769 |
0.6468 | 18.86 | 57 | 0.6684 | 0.7923 |
0.5083 | 19.86 | 60 | 0.6922 | 0.7385 |
0.5083 | 20.86 | 63 | 0.6148 | 0.7923 |
0.4988 | 21.86 | 66 | 0.5846 | 0.7923 |
0.4988 | 22.86 | 69 | 0.6050 | 0.8154 |
0.4123 | 23.86 | 72 | 0.5506 | 0.7846 |
0.3511 | 24.86 | 75 | 0.6095 | 0.7846 |
0.3511 | 25.86 | 78 | 0.5916 | 0.8154 |
0.3268 | 26.86 | 81 | 0.5912 | 0.8077 |
0.3268 | 27.86 | 84 | 0.5142 | 0.8538 |
0.3036 | 28.86 | 87 | 0.5492 | 0.8077 |
0.3066 | 29.86 | 90 | 0.6007 | 0.8231 |
0.3066 | 30.86 | 93 | 0.5748 | 0.8231 |
0.2538 | 31.86 | 96 | 0.6027 | 0.7692 |
0.2538 | 32.86 | 99 | 0.6979 | 0.7462 |
0.2281 | 33.86 | 102 | 0.7002 | 0.7615 |
0.2183 | 34.86 | 105 | 0.6650 | 0.7769 |
0.2183 | 35.86 | 108 | 0.5192 | 0.8462 |
0.2202 | 36.86 | 111 | 0.5389 | 0.8308 |
0.2202 | 37.86 | 114 | 0.5050 | 0.8385 |
0.1906 | 38.86 | 117 | 0.5722 | 0.7769 |
0.154 | 39.86 | 120 | 0.5239 | 0.8308 |
0.154 | 40.86 | 123 | 0.4448 | 0.8615 |
0.1474 | 41.86 | 126 | 0.4623 | 0.8615 |
0.1474 | 42.86 | 129 | 0.4282 | 0.8615 |
0.1345 | 43.86 | 132 | 0.5087 | 0.8615 |
0.1567 | 44.86 | 135 | 0.4859 | 0.8385 |
0.1567 | 45.86 | 138 | 0.6603 | 0.8077 |
0.1731 | 46.86 | 141 | 0.5379 | 0.8385 |
0.1731 | 47.86 | 144 | 0.8666 | 0.7538 |
0.1606 | 48.86 | 147 | 0.7518 | 0.8 |
0.1484 | 49.86 | 150 | 0.5986 | 0.8385 |
0.1484 | 50.86 | 153 | 0.6368 | 0.8231 |
0.2256 | 51.86 | 156 | 0.4639 | 0.8692 |
0.2256 | 52.86 | 159 | 0.5533 | 0.8462 |
0.1178 | 53.86 | 162 | 0.5038 | 0.8615 |
0.0815 | 54.86 | 165 | 0.5052 | 0.8692 |
0.0815 | 55.86 | 168 | 0.4337 | 0.8846 |
0.0998 | 56.86 | 171 | 0.4422 | 0.8769 |
0.0998 | 57.86 | 174 | 0.4317 | 0.8692 |
0.0855 | 58.86 | 177 | 0.4025 | 0.8923 |
0.0962 | 59.86 | 180 | 0.4605 | 0.8769 |
0.0962 | 60.86 | 183 | 0.4356 | 0.8769 |
0.0763 | 61.86 | 186 | 0.4614 | 0.8769 |
0.0763 | 62.86 | 189 | 0.4382 | 0.8846 |
0.0902 | 63.86 | 192 | 0.4701 | 0.8692 |
0.0654 | 64.86 | 195 | 0.4922 | 0.8692 |
0.0654 | 65.86 | 198 | 0.5413 | 0.8538 |
0.0651 | 66.86 | 201 | 0.5759 | 0.8615 |
0.0651 | 67.86 | 204 | 0.4238 | 0.9 |
0.0822 | 68.86 | 207 | 0.3500 | 0.9154 |
0.0625 | 69.86 | 210 | 0.3878 | 0.8923 |
0.0625 | 70.86 | 213 | 0.4952 | 0.8615 |
0.0548 | 71.86 | 216 | 0.4544 | 0.8615 |
0.0548 | 72.86 | 219 | 0.5497 | 0.8769 |
0.054 | 73.86 | 222 | 0.4434 | 0.8846 |
0.0543 | 74.86 | 225 | 0.4732 | 0.8769 |
0.0543 | 75.86 | 228 | 0.4425 | 0.8923 |
0.0881 | 76.86 | 231 | 0.4788 | 0.8769 |
0.0881 | 77.86 | 234 | 0.5448 | 0.8769 |
0.061 | 78.86 | 237 | 0.4221 | 0.9077 |
0.0567 | 79.86 | 240 | 0.4404 | 0.8769 |
0.0567 | 80.86 | 243 | 0.4099 | 0.9 |
0.052 | 81.86 | 246 | 0.5259 | 0.8769 |
0.052 | 82.86 | 249 | 0.5874 | 0.8692 |
0.0444 | 83.86 | 252 | 0.5555 | 0.8846 |
0.0332 | 84.86 | 255 | 0.5156 | 0.8615 |
0.0332 | 85.86 | 258 | 0.4564 | 0.8615 |
0.0449 | 86.86 | 261 | 0.4826 | 0.8692 |
0.0449 | 87.86 | 264 | 0.4726 | 0.8615 |
0.0385 | 88.86 | 267 | 0.4206 | 0.8846 |
0.0356 | 89.86 | 270 | 0.4050 | 0.8769 |
0.0356 | 90.86 | 273 | 0.4161 | 0.8923 |
0.0391 | 91.86 | 276 | 0.4100 | 0.9077 |
0.0391 | 92.86 | 279 | 0.4047 | 0.9 |
0.0249 | 93.86 | 282 | 0.4044 | 0.9 |
0.0399 | 94.86 | 285 | 0.3968 | 0.8846 |
0.0399 | 95.86 | 288 | 0.3802 | 0.9 |
0.031 | 96.86 | 291 | 0.3689 | 0.9 |
0.031 | 97.86 | 294 | 0.3616 | 0.9077 |
0.036 | 98.86 | 297 | 0.3584 | 0.9077 |
0.0386 | 99.86 | 300 | 0.3574 | 0.9077 |
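The headline metrics (loss 0.3500, accuracy 0.9154) correspond to the epoch with the lowest validation loss (epoch 68.86, step 207), not the final epoch. A small sketch of that best-checkpoint selection, using a few rows transcribed from the table above:

```python
# (epoch, step, validation loss, accuracy) rows from the log above.
rows = [
    (67.86, 204, 0.4238, 0.9000),
    (68.86, 207, 0.3500, 0.9154),  # best validation loss
    (69.86, 210, 0.3878, 0.8923),
    (99.86, 300, 0.3574, 0.9077),  # final epoch
]

# Pick the checkpoint with the lowest validation loss, as
# `load_best_model_at_end` does when the tracked metric is eval loss.
best = min(rows, key=lambda r: r[2])
```

This is why the reported evaluation loss is lower than the last logged epoch's 0.3574.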
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6