Commit 0e4a2d2 by gary109 (parent 725d9d9): update model card README.md
---
tags:
- generated_from_trainer
datasets:
- ai_light_dance
metrics:
- wer
model-index:
- name: ai-light-dance_drums_ft_pretrain_wav2vec2-base-new_onset-idmt-2
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: ai_light_dance
      type: ai_light_dance
      config: onset-idmt-2
      split: train
      args: onset-idmt-2
    metrics:
    - name: Wer
      type: wer
      value: 0.26
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# ai-light-dance_drums_ft_pretrain_wav2vec2-base-new_onset-idmt-2

This model is a fine-tuned version of [gary109/ai-light-dance_drums_pretrain_wav2vec2-base](https://huggingface.co/gary109/ai-light-dance_drums_pretrain_wav2vec2-base) on the ai_light_dance dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5174
- Wer: 0.26
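For reference, WER (word error rate) is the word-level edit distance between the predicted and reference transcriptions, divided by the number of reference words. A minimal self-contained sketch of the computation (not the metric-library implementation the Trainer typically uses, which should be preferred in practice):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(ref)][len(hyp)] / len(ref)
```

So a reported Wer of 0.26 means roughly one word-level error for every four reference tokens.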

## Model description

More information needed

## Intended uses & limitations

More information needed
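As a starting point, here is a hedged sketch of how this checkpoint could be loaded for inference. It assumes the repository exposes a processor and a CTC head compatible with `Wav2Vec2ForCTC`, and 16 kHz mono input matching the wav2vec2-base pretraining; verify against the actual model files before relying on it:

```python
def transcribe(audio, sampling_rate=16_000):
    """Run CTC inference with this checkpoint on a 1-D float audio array.

    Assumption: the repo provides a processor and a Wav2Vec2 CTC head;
    check the model files before relying on this sketch.
    Imports are deferred so the sketch reads standalone.
    """
    import torch
    from transformers import AutoProcessor, Wav2Vec2ForCTC

    model_id = "gary109/ai-light-dance_drums_ft_pretrain_wav2vec2-base-new_onset-idmt-2"
    processor = AutoProcessor.from_pretrained(model_id)
    model = Wav2Vec2ForCTC.from_pretrained(model_id)

    inputs = processor(audio, sampling_rate=sampling_rate, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    return processor.batch_decode(predicted_ids)[0]
```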

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 30
- num_epochs: 100.0
- mixed_precision_training: Native AMP
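Two of the values above are derived rather than set directly: the total train batch size is train_batch_size × gradient_accumulation_steps, and the `linear` scheduler ramps the learning rate up over the 30 warmup steps and then decays it linearly to zero. An illustrative sketch of both (the 900 total steps is taken from the training log; the helper approximates the scheduler's shape, not the exact Trainer code):

```python
def linear_warmup_decay_lr(step, base_lr=3e-4, warmup_steps=30, total_steps=900):
    """Illustrative `linear` schedule: ramp up over warmup, then decay to 0."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

# Effective batch size seen by the optimizer at each update:
train_batch_size = 4
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps  # 16
```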

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log        | 1.0   | 9    | 97.9319         | 1.0    |
| 17.1836       | 2.0   | 18   | 45.7229         | 1.0    |
| 13.2869       | 3.0   | 27   | 2.7579          | 1.0    |
| 2.6495        | 4.0   | 36   | 2.7427          | 1.0    |
| 1.7135        | 5.0   | 45   | 2.5477          | 1.0    |
| 1.4609        | 6.0   | 54   | 1.7126          | 1.0    |
| 1.374         | 7.0   | 63   | 1.3668          | 0.9967 |
| 1.2951        | 8.0   | 72   | 1.1274          | 0.9867 |
| 1.0493        | 9.0   | 81   | 0.7346          | 0.5178 |
| 0.8835        | 10.0  | 90   | 0.7664          | 0.4122 |
| 0.8835        | 11.0  | 99   | 0.5438          | 0.3867 |
| 0.7019        | 12.0  | 108  | 0.4876          | 0.3711 |
| 0.6906        | 13.0  | 117  | 0.5194          | 0.36   |
| 0.6535        | 14.0  | 126  | 0.4489          | 0.3556 |
| 0.6225        | 15.0  | 135  | 0.4383          | 0.3333 |
| 0.547         | 16.0  | 144  | 0.4521          | 0.3556 |
| 0.5525        | 17.0  | 153  | 0.5476          | 0.3344 |
| 0.6152        | 18.0  | 162  | 0.4466          | 0.36   |
| 0.5055        | 19.0  | 171  | 0.3981          | 0.3256 |
| 0.5204        | 20.0  | 180  | 0.4924          | 0.3078 |
| 0.5204        | 21.0  | 189  | 0.4085          | 0.32   |
| 0.4742        | 22.0  | 198  | 0.4255          | 0.3233 |
| 0.4774        | 23.0  | 207  | 0.4321          | 0.2889 |
| 0.5029        | 24.0  | 216  | 0.4412          | 0.3167 |
| 0.4889        | 25.0  | 225  | 0.4051          | 0.3044 |
| 0.4446        | 26.0  | 234  | 0.3918          | 0.3089 |
| 0.4255        | 27.0  | 243  | 0.4039          | 0.2956 |
| 0.4396        | 28.0  | 252  | 0.4113          | 0.2956 |
| 0.4265        | 29.0  | 261  | 0.5576          | 0.3022 |
| 0.4289        | 30.0  | 270  | 0.3558          | 0.3078 |
| 0.4289        | 31.0  | 279  | 0.3390          | 0.3167 |
| 0.3817        | 32.0  | 288  | 0.3739          | 0.3422 |
| 0.4192        | 33.0  | 297  | 0.3179          | 0.3056 |
| 0.3719        | 34.0  | 306  | 0.3622          | 0.3033 |
| 0.3685        | 35.0  | 315  | 0.4057          | 0.3256 |
| 0.3752        | 36.0  | 324  | 0.3950          | 0.31   |
| 0.378         | 37.0  | 333  | 0.3907          | 0.3567 |
| 0.4438        | 38.0  | 342  | 0.3376          | 0.31   |
| 0.3978        | 39.0  | 351  | 0.3395          | 0.2833 |
| 0.3639        | 40.0  | 360  | 0.3646          | 0.2856 |
| 0.3639        | 41.0  | 369  | 0.3546          | 0.3044 |
| 0.3535        | 42.0  | 378  | 0.3699          | 0.2889 |
| 0.3311        | 43.0  | 387  | 0.3882          | 0.3022 |
| 0.3475        | 44.0  | 396  | 0.4749          | 0.2889 |
| 0.4048        | 45.0  | 405  | 0.3437          | 0.2911 |
| 0.2984        | 46.0  | 414  | 0.3664          | 0.27   |
| 0.3535        | 47.0  | 423  | 0.3291          | 0.2889 |
| 0.3015        | 48.0  | 432  | 0.3538          | 0.2767 |
| 0.3628        | 49.0  | 441  | 0.4411          | 0.2733 |
| 0.3303        | 50.0  | 450  | 0.3425          | 0.29   |
| 0.3303        | 51.0  | 459  | 0.3162          | 0.3011 |
| 0.271         | 52.0  | 468  | 0.3685          | 0.2933 |
| 0.3299        | 53.0  | 477  | 0.4216          | 0.2933 |
| 0.2782        | 54.0  | 486  | 0.4713          | 0.3044 |
| 0.348         | 55.0  | 495  | 0.4310          | 0.3078 |
| 0.2969        | 56.0  | 504  | 0.4898          | 0.2767 |
| 0.2757        | 57.0  | 513  | 0.5195          | 0.2789 |
| 0.2662        | 58.0  | 522  | 0.4631          | 0.2911 |
| 0.2706        | 59.0  | 531  | 0.4275          | 0.2833 |
| 0.2684        | 60.0  | 540  | 0.5535          | 0.2789 |
| 0.2684        | 61.0  | 549  | 0.4733          | 0.2978 |
| 0.2819        | 62.0  | 558  | 0.4969          | 0.2833 |
| 0.2819        | 63.0  | 567  | 0.6202          | 0.2789 |
| 0.2889        | 64.0  | 576  | 0.3955          | 0.2733 |
| 0.2515        | 65.0  | 585  | 0.3806          | 0.2656 |
| 0.2468        | 66.0  | 594  | 0.3473          | 0.2722 |
| 0.2557        | 67.0  | 603  | 0.4170          | 0.2722 |
| 0.2477        | 68.0  | 612  | 0.4749          | 0.2678 |
| 0.2965        | 69.0  | 621  | 0.4387          | 0.2611 |
| 0.2606        | 70.0  | 630  | 0.4586          | 0.2656 |
| 0.2606        | 71.0  | 639  | 0.5755          | 0.2733 |
| 0.2442        | 72.0  | 648  | 0.5582          | 0.2656 |
| 0.347         | 73.0  | 657  | 0.3897          | 0.2711 |
| 0.2444        | 74.0  | 666  | 0.3369          | 0.2533 |
| 0.2811        | 75.0  | 675  | 0.3487          | 0.2578 |
| 0.24          | 76.0  | 684  | 0.3692          | 0.2589 |
| 0.2466        | 77.0  | 693  | 0.4567          | 0.2578 |
| 0.2769        | 78.0  | 702  | 0.4041          | 0.2633 |
| 0.2464        | 79.0  | 711  | 0.3813          | 0.2622 |
| 0.2791        | 80.0  | 720  | 0.3990          | 0.2556 |
| 0.2791        | 81.0  | 729  | 0.3997          | 0.2489 |
| 0.2365        | 82.0  | 738  | 0.4537          | 0.2533 |
| 0.2693        | 83.0  | 747  | 0.5943          | 0.2611 |
| 0.2285        | 84.0  | 756  | 0.5805          | 0.2656 |
| 0.2468        | 85.0  | 765  | 0.5609          | 0.2656 |
| 0.2226        | 86.0  | 774  | 0.5948          | 0.2667 |
| 0.2419        | 87.0  | 783  | 0.5910          | 0.2544 |
| 0.2254        | 88.0  | 792  | 0.5741          | 0.26   |
| 0.2083        | 89.0  | 801  | 0.4984          | 0.2611 |
| 0.2318        | 90.0  | 810  | 0.5093          | 0.26   |
| 0.2318        | 91.0  | 819  | 0.5284          | 0.2633 |
| 0.2458        | 92.0  | 828  | 0.4885          | 0.2656 |
| 0.2394        | 93.0  | 837  | 0.4818          | 0.2622 |
| 0.2018        | 94.0  | 846  | 0.5037          | 0.26   |
| 0.235         | 95.0  | 855  | 0.5011          | 0.2578 |
| 0.2252        | 96.0  | 864  | 0.4931          | 0.2611 |
| 0.2147        | 97.0  | 873  | 0.4881          | 0.2589 |
| 0.2227        | 98.0  | 882  | 0.4956          | 0.2589 |
| 0.2168        | 99.0  | 891  | 0.5097          | 0.2589 |
| 0.2282        | 100.0 | 900  | 0.5174          | 0.26   |


### Framework versions

- Transformers 4.25.0.dev0
- Pytorch 1.8.1+cu111
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2