---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
  results:
  - task:
      name: Audio Classification
      type: audio-classification
    dataset:
      name: GTZAN
      type: marsyas/gtzan
      config: all
      split: train
      args: all
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.815
---

# distilhubert-finetuned-gtzan

This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2091
- Accuracy: 0.815
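
As a quick-start illustration (not part of the original card), the checkpoint can be used with the Hugging Face Transformers `pipeline` API. The repository ID and audio file path below are placeholders; substitute the actual Hub path of this model.

```python
from transformers import pipeline

# Placeholder repo ID -- replace with the real Hub path of this checkpoint.
classifier = pipeline(
    "audio-classification",
    model="your-username/distilhubert-finetuned-gtzan",
)

# Classify a local clip (path is illustrative). The pipeline decodes the file
# and resamples it to the feature extractor's 16 kHz sampling rate.
predictions = classifier("example_clip.wav", top_k=5)
for pred in predictions:
    print(f"{pred['label']}: {pred['score']:.3f}")
```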

## Model description

DistilHuBERT ([ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert)) is a distilled, lighter-weight version of HuBERT for speech and audio representation learning. This checkpoint adds a classification head on top of the DistilHuBERT encoder and fine-tunes it to predict the ten GTZAN music genres (blues, classical, country, disco, hiphop, jazz, metal, pop, reggae, rock).

## Intended uses & limitations

The model is intended for classifying short music clips into one of the ten GTZAN genres, e.g. in demos, tutorials, or music-tagging prototypes. GTZAN is a small benchmark (roughly 1,000 thirty-second clips), so accuracy on out-of-domain or modern recordings may be noticeably lower than the 81.5% reported here, and the model should be evaluated further before any production use.

## Training and evaluation data

The model was fine-tuned on the `train` split of the `all` configuration of [marsyas/gtzan](https://huggingface.co/datasets/marsyas/gtzan), with a portion held out for evaluation. The per-epoch step counts (67 steps at batch size 12) and the granularity of the reported accuracies are consistent with roughly 800 training and 200 evaluation clips.
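
As a hedged sketch of how the data could be loaded and split (the 80/20 split below is an inference from the step counts, not stated in the card):

```python
from datasets import load_dataset, Audio

# Load the GTZAN genre-classification set (a single "train" split of ~1,000 clips).
gtzan = load_dataset("marsyas/gtzan", "all", split="train")

# Resample the audio column to 16 kHz, the rate DistilHuBERT expects.
gtzan = gtzan.cast_column("audio", Audio(sampling_rate=16_000))

# Hold out an evaluation set; test_size=0.2 is an assumption inferred from the
# reported metrics (67 steps/epoch at batch size 12, accuracies in steps of 0.005).
gtzan = gtzan.train_test_split(seed=42, shuffle=True, test_size=0.2)
```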

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 50
- mixed_precision_training: Native AMP
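
For reference, these settings map onto a `TrainingArguments` configuration roughly as follows; this is a reconstruction, and values not listed above (such as `output_dir` and the evaluation strategy) are assumptions.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilhubert-finetuned-gtzan",  # assumption
    learning_rate=3e-5,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=12,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=50,
    fp16=True,                    # "Native AMP" mixed-precision training
    evaluation_strategy="epoch",  # assumption: the results table reports per-epoch metrics
)
```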

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2632        | 1.0   | 67   | 2.2116          | 0.335    |
| 1.8978        | 2.0   | 134  | 1.8129          | 0.5      |
| 1.5811        | 3.0   | 201  | 1.4946          | 0.66     |
| 1.1795        | 4.0   | 268  | 1.2851          | 0.65     |
| 1.0256        | 5.0   | 335  | 1.1538          | 0.66     |
| 0.9168        | 6.0   | 402  | 1.0270          | 0.69     |
| 0.9383        | 7.0   | 469  | 0.9349          | 0.73     |
| 0.5988        | 8.0   | 536  | 0.8443          | 0.795    |
| 0.4844        | 9.0   | 603  | 0.8053          | 0.775    |
| 0.422         | 10.0  | 670  | 0.7710          | 0.785    |
| 0.2138        | 11.0  | 737  | 0.7353          | 0.8      |
| 0.1834        | 12.0  | 804  | 0.8303          | 0.78     |
| 0.1789        | 13.0  | 871  | 0.7801          | 0.805    |
| 0.1649        | 14.0  | 938  | 0.8433          | 0.775    |
| 0.0259        | 15.0  | 1005 | 0.7846          | 0.8      |
| 0.0825        | 16.0  | 1072 | 0.9268          | 0.795    |
| 0.0091        | 17.0  | 1139 | 1.0432          | 0.795    |
| 0.0053        | 18.0  | 1206 | 0.9703          | 0.8      |
| 0.0038        | 19.0  | 1273 | 0.9689          | 0.82     |
| 0.0246        | 20.0  | 1340 | 1.0611          | 0.81     |
| 0.0023        | 21.0  | 1407 | 1.0502          | 0.82     |
| 0.0023        | 22.0  | 1474 | 1.0703          | 0.815    |
| 0.0016        | 23.0  | 1541 | 1.0911          | 0.825    |
| 0.0015        | 24.0  | 1608 | 1.1375          | 0.795    |
| 0.0013        | 25.0  | 1675 | 1.1529          | 0.815    |
| 0.0172        | 26.0  | 1742 | 1.1258          | 0.815    |
| 0.0011        | 27.0  | 1809 | 1.1206          | 0.82     |
| 0.001         | 28.0  | 1876 | 1.1492          | 0.82     |
| 0.0009        | 29.0  | 1943 | 1.1490          | 0.815    |
| 0.0008        | 30.0  | 2010 | 1.1527          | 0.815    |
| 0.0008        | 31.0  | 2077 | 1.2008          | 0.815    |
| 0.0638        | 32.0  | 2144 | 1.1685          | 0.815    |
| 0.0007        | 33.0  | 2211 | 1.1749          | 0.815    |
| 0.0858        | 34.0  | 2278 | 1.1683          | 0.815    |
| 0.0006        | 35.0  | 2345 | 1.1772          | 0.815    |
| 0.0007        | 36.0  | 2412 | 1.1801          | 0.815    |
| 0.0006        | 37.0  | 2479 | 1.1956          | 0.815    |
| 0.0006        | 38.0  | 2546 | 1.1937          | 0.815    |
| 0.0055        | 39.0  | 2613 | 1.2110          | 0.82     |
| 0.0006        | 40.0  | 2680 | 1.2023          | 0.815    |
| 0.0006        | 41.0  | 2747 | 1.2093          | 0.815    |
| 0.001         | 42.0  | 2814 | 1.2075          | 0.815    |
| 0.0006        | 43.0  | 2881 | 1.2079          | 0.815    |
| 0.0662        | 44.0  | 2948 | 1.2054          | 0.815    |
| 0.0006        | 45.0  | 3015 | 1.2066          | 0.815    |
| 0.0006        | 46.0  | 3082 | 1.2089          | 0.815    |
| 0.0006        | 47.0  | 3149 | 1.2093          | 0.815    |
| 0.0005        | 48.0  | 3216 | 1.2096          | 0.815    |
| 0.0005        | 49.0  | 3283 | 1.2094          | 0.815    |
| 0.0006        | 50.0  | 3350 | 1.2091          | 0.815    |


### Framework versions

- Transformers 4.36.2
- PyTorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0