Terps committed
Commit 65cc8c3 · 1 Parent(s): 1d4731f

End of training

Files changed (2)
  1. README.md +9 -50
  2. pytorch_model.bin +1 -1
README.md CHANGED
@@ -5,24 +5,9 @@ tags:
 - generated_from_trainer
 datasets:
 - marsyas/gtzan
-metrics:
-- accuracy
 model-index:
 - name: distilhubert-finetuned-gtzan
-  results:
-  - task:
-      name: Audio Classification
-      type: audio-classification
-    dataset:
-      name: GTZAN
-      type: marsyas/gtzan
-      config: all
-      split: train
-      args: all
-    metrics:
-    - name: Accuracy
-      type: accuracy
-      value: 0.86
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,8 +17,13 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.5669
-- Accuracy: 0.86
+- eval_loss: 0.4710
+- eval_accuracy: 0.87
+- eval_runtime: 52.8306
+- eval_samples_per_second: 1.893
+- eval_steps_per_second: 0.246
+- epoch: 16.0
+- step: 1808
 
 ## Model description
 
@@ -58,40 +48,9 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_ratio: 0.2
+- lr_scheduler_warmup_ratio: 0.1
 - num_epochs: 25
 
-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss | Accuracy |
-|:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 2.2857 | 1.0 | 113 | 2.2745 | 0.25 |
-| 2.1795 | 2.0 | 226 | 2.1382 | 0.47 |
-| 1.8958 | 3.0 | 339 | 1.8220 | 0.54 |
-| 1.6475 | 4.0 | 452 | 1.5569 | 0.65 |
-| 1.4246 | 5.0 | 565 | 1.3421 | 0.69 |
-| 1.0504 | 6.0 | 678 | 1.1615 | 0.7 |
-| 1.1759 | 7.0 | 791 | 1.0113 | 0.76 |
-| 0.8636 | 8.0 | 904 | 0.8411 | 0.75 |
-| 0.914 | 9.0 | 1017 | 0.7973 | 0.77 |
-| 0.5748 | 10.0 | 1130 | 0.8049 | 0.79 |
-| 0.4442 | 11.0 | 1243 | 0.7253 | 0.79 |
-| 0.4276 | 12.0 | 1356 | 0.6600 | 0.8 |
-| 0.3435 | 13.0 | 1469 | 0.5876 | 0.83 |
-| 0.2779 | 14.0 | 1582 | 0.6596 | 0.82 |
-| 0.2661 | 15.0 | 1695 | 0.5582 | 0.82 |
-| 0.179 | 16.0 | 1808 | 0.5933 | 0.8 |
-| 0.1559 | 17.0 | 1921 | 0.5518 | 0.8 |
-| 0.1914 | 18.0 | 2034 | 0.5229 | 0.82 |
-| 0.0899 | 19.0 | 2147 | 0.5910 | 0.85 |
-| 0.2234 | 20.0 | 2260 | 0.5277 | 0.86 |
-| 0.0578 | 21.0 | 2373 | 0.5493 | 0.84 |
-| 0.0488 | 22.0 | 2486 | 0.5698 | 0.85 |
-| 0.0322 | 23.0 | 2599 | 0.5713 | 0.86 |
-| 0.0331 | 24.0 | 2712 | 0.5747 | 0.85 |
-| 0.1019 | 25.0 | 2825 | 0.5669 | 0.86 |
-
-
 ### Framework versions
 
 - Transformers 4.34.1
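
The updated card describes an audio-classification checkpoint fine-tuned from ntu-spml/distilhubert on GTZAN (eval_accuracy 0.87 at step 1808). A minimal sketch of loading such a checkpoint with the transformers audio-classification pipeline follows; the repo id Terps/distilhubert-finetuned-gtzan and the input file song.wav are illustrative assumptions, not taken from the diff.

```python
# Minimal sketch: run the fine-tuned checkpoint through the audio-classification pipeline.
# The repo id below is an assumption (committer name + model name from the card);
# a local checkout containing the updated pytorch_model.bin also works.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="Terps/distilhubert-finetuned-gtzan",  # assumed Hub id; adjust as needed
)

# "song.wav" is a placeholder clip; GTZAN excerpts are 30-second mono recordings.
for prediction in classifier("song.wav", top_k=5):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```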
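
For the hyperparameter hunk, a minimal sketch of how those settings map onto transformers.TrainingArguments is below. It uses only values visible in the diff context (the Adam betas and epsilon listed in the card are also the library defaults); output_dir is a placeholder, and learning rate and batch sizes are omitted because they fall outside the shown context.

```python
# Minimal sketch: TrainingArguments mirroring the hyperparameters listed in the card,
# including the warmup ratio this commit changes from 0.2 to 0.1.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilhubert-finetuned-gtzan",  # placeholder output directory
    seed=42,
    adam_beta1=0.9,               # Adam with betas=(0.9, 0.999), as in the card
    adam_beta2=0.999,
    adam_epsilon=1e-8,            # epsilon=1e-08
    lr_scheduler_type="linear",
    warmup_ratio=0.1,             # updated in this commit (was 0.2)
    num_train_epochs=25,
)
```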
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:6e5bb0fca5e36c162f6cc0ecf65c3c1064fe0dc0c74e5911255b6bc4f9a24858
+oid sha256:29cc6bd93a8a890391801ac40d196137264246cef7ddd939dfa6d07f242dd668
 size 94783885
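
The pytorch_model.bin change only swaps the git-lfs pointer: same size, new sha256. A minimal sketch for checking a locally materialized weight file against the new pointer (for example after `git lfs pull`) is below.

```python
# Minimal sketch: verify a local pytorch_model.bin against the sha256 and size
# recorded in the updated git-lfs pointer above.
import hashlib
from pathlib import Path

EXPECTED_OID = "29cc6bd93a8a890391801ac40d196137264246cef7ddd939dfa6d07f242dd668"
EXPECTED_SIZE = 94783885

path = Path("pytorch_model.bin")  # path inside a local checkout; adjust as needed

digest = hashlib.sha256()
with path.open("rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        digest.update(chunk)

assert path.stat().st_size == EXPECTED_SIZE, "size does not match the LFS pointer"
assert digest.hexdigest() == EXPECTED_OID, "sha256 does not match the LFS pointer"
print("pytorch_model.bin matches the LFS pointer")
```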