bilalfaye committed
Commit 8293fd5 · verified · 1 Parent(s): 25034c6

End of training

Files changed (2)
  1. README.md +31 -30
  2. model.safetensors +1 -1
README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 license: mit
-base_model: microsoft/speecht5_tts
+base_model: bilalfaye/speecht5_tts-wolof
 tags:
 - generated_from_trainer
 model-index:
@@ -13,9 +13,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # speecht5_tts-wolof-v0.2
 
-This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
+This model is a fine-tuned version of [bilalfaye/speecht5_tts-wolof](https://huggingface.co/bilalfaye/speecht5_tts-wolof) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.3924
+- Loss: 0.3938
 
 ## Model description
 
@@ -35,11 +35,11 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 1e-05
-- train_batch_size: 8
+- train_batch_size: 16
 - eval_batch_size: 8
 - seed: 42
 - gradient_accumulation_steps: 2
-- total_train_batch_size: 16
+- total_train_batch_size: 32
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
@@ -50,31 +50,32 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-------:|:-----:|:---------------:|
-| 0.5083 | 0.9997 | 1908 | 0.4490 |
-| 0.4789 | 2.0 | 3817 | 0.4399 |
-| 0.4684 | 2.9997 | 5725 | 0.4297 |
-| 0.4549 | 4.0 | 7634 | 0.4173 |
-| 0.4448 | 4.9997 | 9542 | 0.4123 |
-| 0.443 | 6.0 | 11451 | 0.4080 |
-| 0.4368 | 6.9997 | 13359 | 0.4059 |
-| 0.4351 | 8.0 | 15268 | 0.4030 |
-| 0.4319 | 8.9997 | 17176 | 0.4027 |
-| 0.4298 | 10.0 | 19085 | 0.4005 |
-| 0.4286 | 10.9997 | 20993 | 0.3996 |
-| 0.428 | 12.0 | 22902 | 0.3989 |
-| 0.4251 | 12.9997 | 24810 | 0.3962 |
-| 0.4257 | 14.0 | 26719 | 0.3971 |
-| 0.4213 | 14.9997 | 28627 | 0.3956 |
-| 0.4245 | 16.0 | 30536 | 0.3949 |
-| 0.4186 | 16.9997 | 32444 | 0.3950 |
-| 0.4213 | 18.0 | 34353 | 0.3948 |
-| 0.4179 | 18.9997 | 36261 | 0.3943 |
-| 0.4177 | 20.0 | 38170 | 0.3952 |
-| 0.416 | 20.9997 | 40078 | 0.3932 |
-| 0.4167 | 22.0 | 41987 | 0.3921 |
-| 0.4148 | 22.9997 | 43895 | 0.3935 |
-| 0.4133 | 24.0 | 45804 | 0.3938 |
-| 0.4169 | 24.9997 | 47712 | 0.3924 |
+| 0.5372 | 0.9995 | 954 | 0.4398 |
+| 0.4646 | 2.0 | 1909 | 0.4214 |
+| 0.4505 | 2.9995 | 2863 | 0.4163 |
+| 0.4443 | 4.0 | 3818 | 0.4109 |
+| 0.4403 | 4.9995 | 4772 | 0.4080 |
+| 0.4368 | 6.0 | 5727 | 0.4057 |
+| 0.4343 | 6.9995 | 6681 | 0.4034 |
+| 0.4315 | 8.0 | 7636 | 0.4018 |
+| 0.4311 | 8.9995 | 8590 | 0.4015 |
+| 0.4273 | 10.0 | 9545 | 0.4017 |
+| 0.4282 | 10.9995 | 10499 | 0.3990 |
+| 0.4249 | 12.0 | 11454 | 0.3986 |
+| 0.4242 | 12.9995 | 12408 | 0.3973 |
+| 0.4225 | 14.0 | 13363 | 0.3966 |
+| 0.4217 | 14.9995 | 14317 | 0.3951 |
+| 0.4208 | 16.0 | 15272 | 0.3950 |
+| 0.42 | 16.9995 | 16226 | 0.3950 |
+| 0.4202 | 18.0 | 17181 | 0.3952 |
+| 0.42 | 18.9995 | 18135 | 0.3943 |
+| 0.4183 | 20.0 | 19090 | 0.3962 |
+| 0.4175 | 20.9995 | 20044 | 0.3937 |
+| 0.4161 | 22.0 | 20999 | 0.3940 |
+| 0.4193 | 22.9995 | 21953 | 0.3932 |
+| 0.4177 | 24.0 | 22908 | 0.3939 |
+| 0.4166 | 24.9995 | 23862 | 0.3936 |
+| 0.4156 | 26.0 | 24817 | 0.3938 |
 
 
 ### Framework versions
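The updated hyperparameter list maps almost one-to-one onto `Seq2SeqTrainingArguments` from transformers, and the batch-size change is the main substantive edit: per-device batch 16 × gradient accumulation 2 = effective batch 32, which is what moves `total_train_batch_size` from 16 to 32. Below is a minimal sketch of that mapping, assuming the card was generated by the `Trainer` API; `output_dir` and `num_train_epochs` are assumptions not present in the diff (the epoch count is read off the final results-table row).

```python
# Hedged sketch: the hyperparameters from the updated card expressed as
# Seq2SeqTrainingArguments. output_dir and num_train_epochs are assumptions.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_tts-wolof-v0.2",  # assumed output directory
    learning_rate=1e-5,
    per_device_train_batch_size=16,        # "train_batch_size" in the card
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,         # effective batch size: 16 * 2 = 32
    lr_scheduler_type="linear",
    warmup_steps=500,
    seed=42,
    num_train_epochs=26,                   # assumed from the last table row (epoch 26.0)
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults,
    # so the optimizer line in the card needs no explicit setting here.
)
```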
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e1282f5269491592abaa653da75e61ff17b0c2fb0ddfc3014098ffdc90b850b2
+oid sha256:d14808a427c75d92604b34a9244dff829a21cbccce1c1236f3ba1153c749fec2
 size 578019720
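The model.safetensors change touches only the Git LFS pointer: the repository stores the file's sha256 oid and byte size, while the weights themselves live in LFS storage. A minimal, standard-library sketch of checking a locally downloaded model.safetensors against the new pointer follows; the local path is a placeholder.

```python
# Minimal sketch: verify a downloaded model.safetensors against the Git LFS
# pointer in this commit (oid = sha256 of the file contents, size = byte count).
import hashlib
import os

EXPECTED_OID = "d14808a427c75d92604b34a9244dff829a21cbccce1c1236f3ba1153c749fec2"
EXPECTED_SIZE = 578019720
path = "model.safetensors"  # placeholder: wherever the file was downloaded

sha256 = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        sha256.update(chunk)

assert os.path.getsize(path) == EXPECTED_SIZE, "size mismatch"
assert sha256.hexdigest() == EXPECTED_OID, "sha256 mismatch"
print("model.safetensors matches the LFS pointer in commit 8293fd5")
```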