
hubert-base-ls960-finetuned-gtzan

This model is a fine-tuned version of facebook/hubert-base-ls960 on the GTZAN dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6645
  • Accuracy: 0.88
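
As a quick usage sketch (not part of the original card), the checkpoint can be loaded for music genre classification through the audio-classification pipeline. The repo id below is the one this card is published under; the audio path is a placeholder.

```python
# Minimal inference sketch, assuming the Hub repo id for this card.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="c72599/hubert-base-ls960-finetuned-gtzan",
)

# "song.wav" is a placeholder path; the pipeline resamples the clip to the
# 16 kHz rate the HuBERT feature extractor expects.
predictions = classifier("song.wav")
print(predictions)  # list of {"label": genre, "score": probability} dicts
```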

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
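
The card does not document the data preparation. For orientation only, here is a minimal sketch of loading GTZAN from the Hub and carving out an evaluation split; the marsyas/gtzan dataset id, the 90/10 split, and the shuffling seed are assumptions rather than settings recorded in this card.

```python
# Hedged sketch, not taken from the card: load GTZAN and create a held-out
# evaluation split. "marsyas/gtzan" is an assumed Hub copy of the dataset.
from datasets import load_dataset

gtzan = load_dataset("marsyas/gtzan", "all")
gtzan = gtzan["train"].train_test_split(test_size=0.1, seed=42, shuffle=True)
print(gtzan)  # DatasetDict with "train" and "test" splits of 30-second clips
```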

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 30
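
For readers who want to reproduce the run, the listed values map onto transformers.TrainingArguments roughly as sketched below. Anything not listed above (the output directory, evaluation and logging strategies) is an assumption, and the Adam betas/epsilon match the library defaults, so they are not set explicitly.

```python
# Sketch only: the hyperparameters above expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="hubert-base-ls960-finetuned-gtzan",  # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=8,   # 2 x 8 = total train batch size of 16
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=30,
    evaluation_strategy="epoch",     # assumed; the card only reports per-epoch metrics
    logging_strategy="epoch",        # assumed
)
```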

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2685 | 1.0 | 56 | 2.2069 | 0.44 |
| 2.0208 | 1.99 | 112 | 1.8352 | 0.46 |
| 1.7603 | 2.99 | 168 | 1.5275 | 0.49 |
| 1.4843 | 4.0 | 225 | 1.4296 | 0.52 |
| 1.347 | 5.0 | 281 | 1.2222 | 0.52 |
| 1.2364 | 5.99 | 337 | 1.1477 | 0.62 |
| 1.2082 | 6.99 | 393 | 1.0181 | 0.67 |
| 0.9861 | 8.0 | 450 | 0.9598 | 0.71 |
| 0.752 | 9.0 | 506 | 0.7499 | 0.77 |
| 1.006 | 9.99 | 562 | 0.8190 | 0.79 |
| 0.6725 | 10.99 | 618 | 0.8798 | 0.75 |
| 0.7457 | 12.0 | 675 | 0.6276 | 0.81 |
| 0.4605 | 13.0 | 731 | 0.6086 | 0.85 |
| 0.5751 | 13.99 | 787 | 0.6894 | 0.75 |
| 0.4886 | 14.99 | 843 | 0.6109 | 0.83 |
| 0.2429 | 16.0 | 900 | 0.6076 | 0.85 |
| 0.3084 | 17.0 | 956 | 0.4646 | 0.86 |
| 0.3762 | 17.99 | 1012 | 0.8349 | 0.81 |
| 0.2897 | 18.99 | 1068 | 0.4509 | 0.89 |
| 0.1296 | 20.0 | 1125 | 0.6791 | 0.86 |
| 0.1291 | 21.0 | 1181 | 0.6466 | 0.85 |
| 0.3784 | 21.99 | 1237 | 0.6272 | 0.88 |
| 0.1156 | 22.99 | 1293 | 0.7916 | 0.85 |
| 0.2093 | 24.0 | 1350 | 0.6536 | 0.85 |
| 0.2167 | 25.0 | 1406 | 0.7050 | 0.87 |
| 0.1095 | 25.99 | 1462 | 0.6128 | 0.88 |
| 0.1004 | 26.99 | 1518 | 0.6092 | 0.89 |
| 0.0897 | 28.0 | 1575 | 0.6730 | 0.88 |
| 0.083 | 29.0 | 1631 | 0.6396 | 0.89 |
| 0.0343 | 29.87 | 1680 | 0.6645 | 0.88 |

Framework versions

  • Transformers 4.32.0.dev0
  • PyTorch 1.13.1+cu116
  • Datasets 2.14.1
  • Tokenizers 0.13.3
