
SER_model_xapiens

This model is a fine-tuned version of facebook/wav2vec2-base on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 3.0600
  • Accuracy: 0.7
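
As a quick way to try the checkpoint, the snippet below loads it with the generic transformers audio-classification pipeline. This is a minimal sketch rather than documented usage from the card: it assumes the checkpoint exposes a Wav2Vec2 sequence-classification head under the repository id Nugrahasetyaardi/SER_model_xapiens, and example.wav is a placeholder for a local audio file.

```python
# Minimal inference sketch (an assumption, not documented usage):
# loads the checkpoint through the generic audio-classification pipeline.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="Nugrahasetyaardi/SER_model_xapiens",
)

# "example.wav" is a placeholder path; the pipeline decodes the file at the
# feature extractor's sampling rate (16 kHz for wav2vec2-base).
predictions = classifier("example.wav")
print(predictions)  # list of {"label": ..., "score": ...}, best score first
```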

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a rough mapping to TrainingArguments is sketched after the list):

  • learning_rate: 3e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 256
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 100
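
For orientation, the hyperparameters above map roughly onto transformers.TrainingArguments as sketched below. This is not the original training script; output_dir and everything outside the listed values are placeholders.

```python
# Rough mapping of the listed hyperparameters onto TrainingArguments.
# A sketch only: output_dir is a placeholder and the model/dataset wiring
# of the original run is not shown.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="SER_model_xapiens",     # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    gradient_accumulation_steps=4,      # 64 * 4 = 256 total train batch size
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=100,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```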

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 2.6350 | 0.7167 |
| No log | 2.0 | 2 | 2.7406 | 0.7167 |
| No log | 3.0 | 3 | 2.8632 | 0.7 |
| No log | 4.0 | 4 | 2.8745 | 0.7 |
| No log | 5.0 | 5 | 2.9112 | 0.7 |
| No log | 6.0 | 6 | 2.9644 | 0.6833 |
| No log | 7.0 | 7 | 3.3638 | 0.65 |
| No log | 8.0 | 8 | 3.8127 | 0.5833 |
| No log | 9.0 | 9 | 4.2359 | 0.5833 |
| 0.0038 | 10.0 | 10 | 3.7159 | 0.6167 |
| 0.0038 | 11.0 | 11 | 3.2066 | 0.65 |
| 0.0038 | 12.0 | 12 | 2.9436 | 0.7333 |
| 0.0038 | 13.0 | 13 | 3.1083 | 0.7 |
| 0.0038 | 14.0 | 14 | 3.4327 | 0.6833 |
| 0.0038 | 15.0 | 15 | 3.4017 | 0.6833 |
| 0.0038 | 16.0 | 16 | 3.1088 | 0.7 |
| 0.0038 | 17.0 | 17 | 2.9279 | 0.7 |
| 0.0038 | 18.0 | 18 | 2.9036 | 0.7333 |
| 0.0038 | 19.0 | 19 | 2.9245 | 0.7167 |
| 0.0097 | 20.0 | 20 | 2.8291 | 0.7333 |
| 0.0097 | 21.0 | 21 | 2.8655 | 0.7 |
| 0.0097 | 22.0 | 22 | 2.7505 | 0.7167 |
| 0.0097 | 23.0 | 23 | 2.6406 | 0.7333 |
| 0.0097 | 24.0 | 24 | 2.6017 | 0.7333 |
| 0.0097 | 25.0 | 25 | 2.5550 | 0.75 |
| 0.0097 | 26.0 | 26 | 2.5972 | 0.75 |
| 0.0097 | 27.0 | 27 | 2.6193 | 0.7333 |
| 0.0097 | 28.0 | 28 | 2.6949 | 0.7167 |
| 0.0097 | 29.0 | 29 | 2.8405 | 0.6833 |
| 0.0052 | 30.0 | 30 | 2.8552 | 0.7 |
| 0.0052 | 31.0 | 31 | 2.7041 | 0.7 |
| 0.0052 | 32.0 | 32 | 2.5161 | 0.7333 |
| 0.0052 | 33.0 | 33 | 2.4609 | 0.7333 |
| 0.0052 | 34.0 | 34 | 2.4119 | 0.7333 |
| 0.0052 | 35.0 | 35 | 2.3811 | 0.75 |
| 0.0052 | 36.0 | 36 | 2.4323 | 0.75 |
| 0.0052 | 37.0 | 37 | 2.6068 | 0.75 |
| 0.0052 | 38.0 | 38 | 2.6425 | 0.75 |
| 0.0052 | 39.0 | 39 | 2.6534 | 0.75 |
| 0.004 | 40.0 | 40 | 2.6643 | 0.7333 |
| 0.004 | 41.0 | 41 | 2.6912 | 0.7167 |
| 0.004 | 42.0 | 42 | 2.7284 | 0.7 |
| 0.004 | 43.0 | 43 | 2.7621 | 0.7 |
| 0.004 | 44.0 | 44 | 2.7870 | 0.7 |
| 0.004 | 45.0 | 45 | 2.8085 | 0.7 |
| 0.004 | 46.0 | 46 | 2.8251 | 0.7 |
| 0.004 | 47.0 | 47 | 2.8478 | 0.7167 |
| 0.004 | 48.0 | 48 | 2.8729 | 0.7 |
| 0.004 | 49.0 | 49 | 2.8999 | 0.7 |
| 0.0001 | 50.0 | 50 | 2.9256 | 0.7 |
| 0.0001 | 51.0 | 51 | 2.9287 | 0.7 |
| 0.0001 | 52.0 | 52 | 2.8962 | 0.7 |
| 0.0001 | 53.0 | 53 | 2.8783 | 0.6833 |
| 0.0001 | 54.0 | 54 | 2.8959 | 0.7167 |
| 0.0001 | 55.0 | 55 | 2.8606 | 0.7167 |
| 0.0001 | 56.0 | 56 | 2.7337 | 0.7167 |
| 0.0001 | 57.0 | 57 | 2.7450 | 0.7 |
| 0.0001 | 58.0 | 58 | 2.9678 | 0.6667 |
| 0.0001 | 59.0 | 59 | 3.1560 | 0.6667 |
| 0.0145 | 60.0 | 60 | 3.4292 | 0.6333 |
| 0.0145 | 61.0 | 61 | 3.5474 | 0.6167 |
| 0.0145 | 62.0 | 62 | 3.6343 | 0.6167 |
| 0.0145 | 63.0 | 63 | 3.5805 | 0.6167 |
| 0.0145 | 64.0 | 64 | 3.5623 | 0.6333 |
| 0.0145 | 65.0 | 65 | 3.5532 | 0.6333 |
| 0.0145 | 66.0 | 66 | 3.6041 | 0.6 |
| 0.0145 | 67.0 | 67 | 3.7335 | 0.6 |
| 0.0145 | 68.0 | 68 | 3.8447 | 0.6 |
| 0.0145 | 69.0 | 69 | 3.8997 | 0.6167 |
| 0.0001 | 70.0 | 70 | 3.8381 | 0.6167 |
| 0.0001 | 71.0 | 71 | 3.7951 | 0.6 |
| 0.0001 | 72.0 | 72 | 3.8582 | 0.6 |
| 0.0001 | 73.0 | 73 | 3.8983 | 0.6 |
| 0.0001 | 74.0 | 74 | 3.8225 | 0.6167 |
| 0.0001 | 75.0 | 75 | 3.6746 | 0.65 |
| 0.0001 | 76.0 | 76 | 3.5636 | 0.65 |
| 0.0001 | 77.0 | 77 | 3.5198 | 0.6667 |
| 0.0001 | 78.0 | 78 | 3.4888 | 0.6667 |
| 0.0001 | 79.0 | 79 | 3.4752 | 0.65 |
| 0.004 | 80.0 | 80 | 3.4661 | 0.65 |
| 0.004 | 81.0 | 81 | 3.4177 | 0.65 |
| 0.004 | 82.0 | 82 | 3.3299 | 0.65 |
| 0.004 | 83.0 | 83 | 3.2658 | 0.6667 |
| 0.004 | 84.0 | 84 | 3.2386 | 0.6833 |
| 0.004 | 85.0 | 85 | 3.2177 | 0.7 |
| 0.004 | 86.0 | 86 | 3.1985 | 0.7 |
| 0.004 | 87.0 | 87 | 3.1800 | 0.7 |
| 0.004 | 88.0 | 88 | 3.1624 | 0.7 |
| 0.004 | 89.0 | 89 | 3.1451 | 0.7 |
| 0.0001 | 90.0 | 90 | 3.1292 | 0.7 |
| 0.0001 | 91.0 | 91 | 3.1145 | 0.7 |
| 0.0001 | 92.0 | 92 | 3.1022 | 0.7 |
| 0.0001 | 93.0 | 93 | 3.0917 | 0.7 |
| 0.0001 | 94.0 | 94 | 3.0817 | 0.7 |
| 0.0001 | 95.0 | 95 | 3.0753 | 0.7 |
| 0.0001 | 96.0 | 96 | 3.0691 | 0.7 |
| 0.0001 | 97.0 | 97 | 3.0658 | 0.7 |
| 0.0001 | 98.0 | 98 | 3.0623 | 0.7 |
| 0.0001 | 99.0 | 99 | 3.0608 | 0.7 |
| 0.0 | 100.0 | 100 | 3.0600 | 0.7 |
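
The Accuracy column is plain classification accuracy on the evaluation split. A typical compute_metrics hook that produces such a value with the Trainer is sketched below; it is an assumption about the metric wiring, not the original code.

```python
# Sketch of a compute_metrics hook yielding an "accuracy" value like the
# column above; an assumption, not the original training code.
import numpy as np
import evaluate

accuracy_metric = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy_metric.compute(predictions=predictions, references=labels)
```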

Framework versions

  • Transformers 4.42.0.dev0
  • Pytorch 2.3.0
  • Datasets 2.19.2.dev0
  • Tokenizers 0.19.1
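
To compare a local environment against the versions above, a small convenience snippet (not part of the original card) is:

```python
# Print installed versions to compare against the list above.
import transformers, torch, datasets, tokenizers

for module in (transformers, torch, datasets, tokenizers):
    print(module.__name__, module.__version__)
```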