ai-light-dance_drums_ft_pretrain_wav2vec2-base-new-13k_onset-drums_fold_3

This model is a fine-tuned version of gary109/ai-light-dance_drums_ft_pretrain_wav2vec2-base-new-13k_onset-drums_fold_2 on the GARY109/AI_LIGHT_DANCE - ONSET-DRUMS_FOLD_3 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4093
  • WER (word error rate): 0.1250
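
The checkpoint can be loaded with the standard Transformers CTC classes. The following is a minimal, untested sketch, assuming the repository ships a Wav2Vec2-style processor and feature extractor; the input file name and the greedy decoding step are illustrative and not taken from the original training script.

```python
# Minimal inference sketch (assumptions: standard Wav2Vec2-CTC layout,
# a processor stored in the repo, and a mono input file "drums.wav").
import torch
import librosa
from transformers import AutoProcessor, AutoModelForCTC

repo_id = "gary109/ai-light-dance_drums_ft_pretrain_wav2vec2-base-new-13k_onset-drums_fold_3"
processor = AutoProcessor.from_pretrained(repo_id)
model = AutoModelForCTC.from_pretrained(repo_id).eval()

# Resample the waveform to the rate expected by the feature extractor.
audio, sr = librosa.load("drums.wav", sr=processor.feature_extractor.sampling_rate)
inputs = processor(audio, sampling_rate=sr, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding into the model's onset-token vocabulary.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```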

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an illustrative TrainingArguments sketch follows the list):

  • learning_rate: 0.0003
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 50.0
  • mixed_precision_training: Native AMP
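
For reproducibility, the hyperparameters above map onto a Transformers TrainingArguments configuration roughly as sketched below. This is an illustrative reconstruction, not the original training script; the output directory and the evaluation/save strategies are assumptions.

```python
# Illustrative reconstruction of the listed hyperparameters (assumed, not the
# original script). Adam betas (0.9, 0.999) and epsilon 1e-08 are the library
# defaults, so they are not set explicitly here.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./onset-drums_fold_3",   # placeholder path
    learning_rate=3e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,       # 4 x 4 = 16 effective train batch size
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=50.0,
    fp16=True,                           # "Native AMP" mixed precision
    evaluation_strategy="epoch",         # assumption: metrics were logged once per epoch
    save_strategy="epoch",               # assumption
)
```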

Training results

| Training Loss | Epoch | Step | Validation Loss | WER |
|:---:|:---:|:---:|:---:|:---:|
| 0.4557 | 1.0 | 70 | 0.5794 | 0.1197 |
| 0.6796 | 2.0 | 140 | 0.5726 | 0.1388 |
| 0.4511 | 3.0 | 210 | 0.6290 | 0.1242 |
| 0.609 | 4.0 | 280 | 0.7112 | 0.1187 |
| 0.4082 | 5.0 | 350 | 0.8275 | 0.1965 |
| 0.4638 | 6.0 | 420 | 0.4767 | 0.1524 |
| 0.4446 | 7.0 | 490 | 0.5091 | 0.1376 |
| 0.4337 | 8.0 | 560 | 0.6622 | 0.1170 |
| 0.4604 | 9.0 | 630 | 0.7242 | 0.1600 |
| 0.4462 | 10.0 | 700 | 0.7298 | 0.1383 |
| 0.4201 | 11.0 | 770 | 0.8058 | 0.1362 |
| 0.4204 | 12.0 | 840 | 0.6255 | 0.1099 |
| 0.461 | 13.0 | 910 | 0.5204 | 0.1109 |
| 0.3779 | 14.0 | 980 | 0.6911 | 0.1125 |
| 0.3403 | 15.0 | 1050 | 0.5863 | 0.1188 |
| 0.6223 | 16.0 | 1120 | 0.6367 | 0.1147 |
| 0.3827 | 17.0 | 1190 | 0.6266 | 0.1293 |
| 0.3055 | 18.0 | 1260 | 0.4866 | 0.1095 |
| 0.3917 | 19.0 | 1330 | 0.4093 | 0.1250 |
| 0.3912 | 20.0 | 1400 | 0.4514 | 0.1077 |
| 0.3861 | 21.0 | 1470 | 0.5043 | 0.1156 |
| 0.3659 | 22.0 | 1540 | 0.5680 | 0.1091 |
| 0.3536 | 23.0 | 1610 | 0.7940 | 0.1029 |
| 0.3559 | 24.0 | 1680 | 0.5877 | 0.1101 |
| 0.3274 | 25.0 | 1750 | 0.4461 | 0.1059 |
| 0.5232 | 26.0 | 1820 | 1.2051 | 0.1068 |
| 0.3241 | 27.0 | 1890 | 0.8716 | 0.1099 |
| 0.3169 | 28.0 | 1960 | 0.6752 | 0.1082 |
| 0.2938 | 29.0 | 2030 | 0.6023 | 0.1071 |
| 0.3022 | 30.0 | 2100 | 0.6122 | 0.1146 |
| 0.4245 | 31.0 | 2170 | 0.5735 | 0.1102 |
| 0.3095 | 32.0 | 2240 | 0.4476 | 0.1042 |
| 0.4062 | 33.0 | 2310 | 0.6339 | 0.1130 |
| 0.3202 | 34.0 | 2380 | 0.4101 | 0.1077 |
| 0.2952 | 35.0 | 2450 | 0.4825 | 0.1076 |
| 0.2945 | 36.0 | 2520 | 0.4998 | 0.1058 |
| 0.336 | 37.0 | 2590 | 0.5490 | 0.1061 |
| 0.2912 | 38.0 | 2660 | 0.4804 | 0.1038 |
| 0.282 | 39.0 | 2730 | 0.4776 | 0.1022 |
| 0.4359 | 40.0 | 2800 | 0.4376 | 0.1044 |
| 0.2698 | 41.0 | 2870 | 0.5609 | 0.1098 |
| 0.3004 | 42.0 | 2940 | 0.5258 | 0.1083 |
| 0.2873 | 43.0 | 3010 | 0.4810 | 0.1069 |
| 0.3413 | 44.0 | 3080 | 0.4961 | 0.1080 |
| 0.2802 | 45.0 | 3150 | 0.6850 | 0.1076 |
| 0.2584 | 46.0 | 3220 | 0.7210 | 0.1082 |
| 0.3282 | 47.0 | 3290 | 0.6179 | 0.1053 |
| 0.2666 | 48.0 | 3360 | 0.7673 | 0.1075 |
| 0.2989 | 49.0 | 3430 | 0.7710 | 0.1079 |
| 0.2676 | 50.0 | 3500 | 0.7655 | 0.1076 |
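
The evaluation result reported at the top of this card (loss 0.4093, WER 0.1250) corresponds to the epoch 19 row, which has the lowest validation loss in the table. The headline metric is word error rate over the decoded onset-token sequences; a generic way to compute it, assuming the `evaluate` library and using placeholder strings in place of real decodes and references, is:

```python
# Generic WER computation sketch; the prediction/reference strings below are
# placeholders, not samples from the ONSET-DRUMS_FOLD_3 evaluation set.
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["kick snare hat hat", "kick kick snare"]   # hypothetical decodes
references = ["kick snare hat hat", "kick snare snare"]   # hypothetical ground truth
print(wer_metric.compute(predictions=predictions, references=references))
```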

Framework versions

  • Transformers 4.24.0.dev0
  • Pytorch 1.12.1+cu113
  • Datasets 2.6.1
  • Tokenizers 0.13.1