
# SER_model_xapiens_binary

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 1.9238
- Accuracy: 0.6121

## Model description

More information needed. Based on its name, this appears to be a binary speech emotion recognition (SER) classifier. The checkpoint has roughly 94.6M parameters stored in F32.

## Intended uses & limitations

More information needed
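Pending official usage documentation, the model can presumably be loaded with the standard `transformers` audio-classification pipeline. The sketch below is an assumption, not an official example: the hub repository id and the audio file name are placeholders, and wav2vec2-base expects 16 kHz mono input.

```python
# Hypothetical usage sketch; the repository id below is a placeholder
# taken from this card's title and may not match the actual hub path.
MODEL_ID = "SER_model_xapiens_binary"
TARGET_SR = 16000  # facebook/wav2vec2-base was pretrained on 16 kHz mono audio


def classify(audio_path: str):
    """Run binary SER inference on a single audio file (assumed workflow)."""
    # Imported lazily so the sketch stays importable without transformers installed.
    from transformers import pipeline

    clf = pipeline("audio-classification", model=MODEL_ID)
    # Returns a list of {"label": ..., "score": ...} dicts, highest score first.
    return clf(audio_path)


if __name__ == "__main__":
    print(classify("example.wav"))  # "example.wav" is a placeholder file
```

Audio at other sampling rates should be resampled to 16 kHz before inference.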

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
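Two of these values follow from the others: the effective batch size is `train_batch_size × gradient_accumulation_steps`, and the warmup length is `warmup_ratio × total steps`. A minimal sketch of the linear warmup schedule, using the step counts visible in the results table (9 optimizer steps per epoch, 900 total):

```python
LEARNING_RATE = 3e-5
TRAIN_BATCH = 64
GRAD_ACCUM = 4
TOTAL_STEPS = 900      # 100 epochs x 9 optimizer steps (from the results table)
WARMUP_RATIO = 0.1


def effective_batch_size() -> int:
    """Gradient accumulation multiplies the per-device batch size."""
    return TRAIN_BATCH * GRAD_ACCUM  # 64 * 4 = 256 = total_train_batch_size


def lr_at(step: int) -> float:
    """Linear schedule with warmup: ramp up for the first 10% of steps,
    then decay linearly to zero, as the HF Trainer's 'linear' scheduler does."""
    warmup = int(TOTAL_STEPS * WARMUP_RATIO)  # 90 steps
    if step < warmup:
        return LEARNING_RATE * step / warmup
    return LEARNING_RATE * (TOTAL_STEPS - step) / (TOTAL_STEPS - warmup)
```

So the learning rate peaks at 3e-05 around step 90 (epoch 10) and reaches zero at step 900.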

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 9 | 0.6576 | 0.6186 |
| 0.6373 | 2.0 | 18 | 0.6570 | 0.6239 |
| 0.6356 | 3.0 | 27 | 0.6528 | 0.6252 |
| 0.6255 | 4.0 | 36 | 0.6525 | 0.6212 |
| 0.6275 | 5.0 | 45 | 0.6459 | 0.6239 |
| 0.62 | 6.0 | 54 | 0.6564 | 0.6252 |
| 0.6079 | 7.0 | 63 | 0.6575 | 0.6173 |
| 0.6066 | 8.0 | 72 | 0.6592 | 0.6304 |
| 0.6032 | 9.0 | 81 | 0.6540 | 0.6265 |
| 0.5894 | 10.0 | 90 | 0.6553 | 0.6252 |
| 0.5894 | 11.0 | 99 | 0.6764 | 0.6239 |
| 0.5707 | 12.0 | 108 | 0.6768 | 0.6370 |
| 0.5665 | 13.0 | 117 | 0.6877 | 0.6330 |
| 0.5478 | 14.0 | 126 | 0.7169 | 0.6134 |
| 0.5303 | 15.0 | 135 | 0.7437 | 0.5767 |
| 0.5113 | 16.0 | 144 | 0.7235 | 0.6252 |
| 0.4867 | 17.0 | 153 | 0.7914 | 0.6055 |
| 0.4525 | 18.0 | 162 | 0.7906 | 0.6121 |
| 0.4339 | 19.0 | 171 | 0.7946 | 0.6029 |
| 0.4231 | 20.0 | 180 | 0.7982 | 0.6121 |
| 0.4231 | 21.0 | 189 | 0.7991 | 0.6278 |
| 0.3913 | 22.0 | 198 | 0.8757 | 0.6252 |
| 0.3967 | 23.0 | 207 | 0.8746 | 0.6042 |
| 0.3615 | 24.0 | 216 | 0.8731 | 0.5976 |
| 0.3354 | 25.0 | 225 | 0.9350 | 0.6173 |
| 0.3288 | 26.0 | 234 | 0.9604 | 0.6042 |
| 0.3087 | 27.0 | 243 | 0.9534 | 0.6199 |
| 0.2977 | 28.0 | 252 | 0.9491 | 0.6107 |
| 0.2657 | 29.0 | 261 | 1.0096 | 0.6134 |
| 0.2614 | 30.0 | 270 | 1.0069 | 0.6042 |
| 0.2614 | 31.0 | 279 | 1.0388 | 0.5990 |
| 0.2309 | 32.0 | 288 | 1.1028 | 0.5950 |
| 0.2155 | 33.0 | 297 | 1.1211 | 0.6239 |
| 0.2162 | 34.0 | 306 | 1.1376 | 0.5767 |
| 0.2055 | 35.0 | 315 | 1.2133 | 0.5806 |
| 0.1949 | 36.0 | 324 | 1.1371 | 0.5963 |
| 0.1897 | 37.0 | 333 | 1.2791 | 0.5793 |
| 0.1715 | 38.0 | 342 | 1.2176 | 0.5924 |
| 0.1622 | 39.0 | 351 | 1.2985 | 0.5832 |
| 0.1571 | 40.0 | 360 | 1.3201 | 0.5832 |
| 0.1571 | 41.0 | 369 | 1.3364 | 0.5976 |
| 0.1518 | 42.0 | 378 | 1.2600 | 0.6173 |
| 0.1715 | 43.0 | 387 | 1.2334 | 0.6134 |
| 0.1702 | 44.0 | 396 | 1.2732 | 0.6055 |
| 0.1292 | 45.0 | 405 | 1.3833 | 0.5963 |
| 0.1104 | 46.0 | 414 | 1.4520 | 0.5990 |
| 0.1572 | 47.0 | 423 | 1.3802 | 0.6291 |
| 0.132 | 48.0 | 432 | 1.4400 | 0.6003 |
| 0.1528 | 49.0 | 441 | 1.3557 | 0.6212 |
| 0.1219 | 50.0 | 450 | 1.3858 | 0.5911 |
| 0.1219 | 51.0 | 459 | 1.4453 | 0.5937 |
| 0.1014 | 52.0 | 468 | 1.3600 | 0.6212 |
| 0.0951 | 53.0 | 477 | 1.4042 | 0.6186 |
| 0.0962 | 54.0 | 486 | 1.4200 | 0.6199 |
| 0.0937 | 55.0 | 495 | 1.4017 | 0.6291 |
| 0.1005 | 56.0 | 504 | 1.4544 | 0.6081 |
| 0.0832 | 57.0 | 513 | 1.4527 | 0.6252 |
| 0.0896 | 58.0 | 522 | 1.4970 | 0.6068 |
| 0.0754 | 59.0 | 531 | 1.5413 | 0.6016 |
| 0.076 | 60.0 | 540 | 1.5303 | 0.6186 |
| 0.076 | 61.0 | 549 | 1.5076 | 0.6134 |
| 0.0945 | 62.0 | 558 | 1.4725 | 0.6225 |
| 0.0636 | 63.0 | 567 | 1.5103 | 0.6160 |
| 0.0866 | 64.0 | 576 | 1.5384 | 0.6304 |
| 0.0558 | 65.0 | 585 | 1.5868 | 0.6081 |
| 0.0607 | 66.0 | 594 | 1.5560 | 0.6265 |
| 0.0595 | 67.0 | 603 | 1.6161 | 0.6212 |
| 0.0566 | 68.0 | 612 | 1.6855 | 0.5911 |
| 0.042 | 69.0 | 621 | 1.6898 | 0.6186 |
| 0.0506 | 70.0 | 630 | 1.6505 | 0.6330 |
| 0.0506 | 71.0 | 639 | 1.6898 | 0.6107 |
| 0.048 | 72.0 | 648 | 1.6370 | 0.6225 |
| 0.0564 | 73.0 | 657 | 1.6576 | 0.6147 |
| 0.0512 | 74.0 | 666 | 1.7149 | 0.6003 |
| 0.0482 | 75.0 | 675 | 1.7085 | 0.6199 |
| 0.0351 | 76.0 | 684 | 1.7413 | 0.6134 |
| 0.0432 | 77.0 | 693 | 1.7627 | 0.6016 |
| 0.0368 | 78.0 | 702 | 1.7961 | 0.6291 |
| 0.0416 | 79.0 | 711 | 1.7539 | 0.6199 |
| 0.0398 | 80.0 | 720 | 1.7764 | 0.6160 |
| 0.0398 | 81.0 | 729 | 1.8516 | 0.6252 |
| 0.0311 | 82.0 | 738 | 1.8629 | 0.6199 |
| 0.0327 | 83.0 | 747 | 1.8564 | 0.6147 |
| 0.0404 | 84.0 | 756 | 1.8447 | 0.6291 |
| 0.0407 | 85.0 | 765 | 1.8198 | 0.6173 |
| 0.0432 | 86.0 | 774 | 1.8477 | 0.6029 |
| 0.0367 | 87.0 | 783 | 1.8972 | 0.6317 |
| 0.0334 | 88.0 | 792 | 1.8497 | 0.6121 |
| 0.0228 | 89.0 | 801 | 1.8638 | 0.6199 |
| 0.0351 | 90.0 | 810 | 1.8128 | 0.6107 |
| 0.0351 | 91.0 | 819 | 1.8479 | 0.6186 |
| 0.0342 | 92.0 | 828 | 1.8849 | 0.6225 |
| 0.0257 | 93.0 | 837 | 1.9021 | 0.6160 |
| 0.0288 | 94.0 | 846 | 1.9284 | 0.6147 |
| 0.0241 | 95.0 | 855 | 1.9315 | 0.6107 |
| 0.0237 | 96.0 | 864 | 1.9235 | 0.5976 |
| 0.0231 | 97.0 | 873 | 1.9428 | 0.6225 |
| 0.0217 | 98.0 | 882 | 1.9266 | 0.6199 |
| 0.0215 | 99.0 | 891 | 1.9238 | 0.6160 |
| 0.0207 | 100.0 | 900 | 1.9238 | 0.6121 |

### Framework versions

- Transformers 4.42.0.dev0
- Pytorch 2.3.0
- Datasets 2.19.2.dev0
- Tokenizers 0.19.1