divakaivan committed
Commit 85afe1a
1 Parent(s): a0db5b7

End of training

Files changed (3)
  1. README.md +11 -15
  2. generation_config.json +1 -1
  3. model.safetensors +1 -1
README.md CHANGED
@@ -22,7 +22,7 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the glaswegian_tts_v0.1.0 dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.5090
+ - Loss: 0.4605
 
  ## Model description
 
@@ -42,34 +42,30 @@ More information needed
 
  The following hyperparameters were used during training:
  - learning_rate: 1e-05
- - train_batch_size: 16
+ - train_batch_size: 4
  - eval_batch_size: 8
  - seed: 42
- - gradient_accumulation_steps: 2
+ - gradient_accumulation_steps: 8
  - total_train_batch_size: 32
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_steps: 1000
- - training_steps: 8000
+ - training_steps: 4000
  - mixed_precision_training: Native AMP
 
  ### Training results
 
  | Training Loss | Epoch | Step | Validation Loss |
  |:-------------:|:--------:|:----:|:---------------:|
- | 0.4421 | 52.6316 | 1000 | 0.4186 |
- | 0.3878 | 105.2632 | 2000 | 0.4447 |
- | 0.3775 | 157.8947 | 3000 | 0.4671 |
- | 0.3639 | 210.5263 | 4000 | 0.4907 |
- | 0.354 | 263.1579 | 5000 | 0.4884 |
- | 0.356 | 315.7895 | 6000 | 0.4997 |
- | 0.3451 | 368.4211 | 7000 | 0.5021 |
- | 0.3514 | 421.0526 | 8000 | 0.5090 |
+ | 0.4699 | 35.2423 | 1000 | 0.4320 |
+ | 0.4246 | 70.4846 | 2000 | 0.4422 |
+ | 0.4115 | 105.7269 | 3000 | 0.4529 |
+ | 0.4127 | 140.9692 | 4000 | 0.4605 |
 
 
  ### Framework versions
 
- - Transformers 4.42.0.dev0
- - Pytorch 2.3.0+cu121
- - Datasets 2.19.2
+ - Transformers 4.44.0.dev0
+ - Pytorch 2.3.1+cu121
+ - Datasets 2.20.0
  - Tokenizers 0.19.1
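For context, the effective batch size is unchanged by this commit: the old run used train_batch_size 16 × gradient_accumulation_steps 2 = 32, and the new run uses 4 × 8 = 32, trading per-device memory for more accumulation steps. Below is a minimal sketch of how the new hyperparameters would map onto the standard transformers Trainer API; the actual training script is not part of this commit, and the output_dir name is a placeholder.

```python
# Minimal sketch: the updated README hyperparameters expressed as
# Seq2SeqTrainingArguments. Not the author's actual script.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_tts_glaswegian",  # placeholder name
    learning_rate=1e-5,
    per_device_train_batch_size=4,         # train_batch_size: 4
    per_device_eval_batch_size=8,          # eval_batch_size: 8
    gradient_accumulation_steps=8,         # 4 x 8 = effective batch of 32
    lr_scheduler_type="linear",
    warmup_steps=1000,                     # lr_scheduler_warmup_steps: 1000
    max_steps=4000,                        # training_steps: 4000
    seed=42,
    fp16=True,                             # "Native AMP" mixed precision
)
```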
generation_config.json CHANGED
@@ -5,5 +5,5 @@
  "eos_token_id": 2,
  "max_length": 1876,
  "pad_token_id": 1,
- "transformers_version": "4.42.0.dev0"
+ "transformers_version": "4.44.0.dev0"
  }
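The only change here is the transformers_version metadata field, which records the library version that serialized the file; loading behavior is unaffected. A minimal sketch using the standard GenerationConfig API follows; the repo id is a placeholder inferred from the commit context, not confirmed by the diff.

```python
# Sketch: loading this model's generation config. The repo id below is an
# assumption; substitute the actual Hub repo for this model.
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained("divakaivan/glaswegian_tts")
print(gen_config.max_length)  # 1876, per the diff above
```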
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:6ffc550c68a33e5a7c68309cc827e069dfb7567ae4b853ab00a8c811e321596d
+ oid sha256:c3d2bb3362f5042b714d838b4c4f0a560bf88d567037233414465dc8dedacb17
  size 577789320
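model.safetensors is stored via git-LFS, so the diff touches only the pointer file: the oid is the SHA-256 of the actual weights and size is their byte count, which is unchanged at 577,789,320 bytes since only the weight values differ. A small sketch for checking a downloaded file against the new pointer; the local path is a placeholder.

```python
# Sketch: verify a downloaded model.safetensors against the git-LFS pointer
# by recomputing its SHA-256. The file path is assumed.
import hashlib

def lfs_oid(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "c3d2bb3362f5042b714d838b4c4f0a560bf88d567037233414465dc8dedacb17"
assert lfs_oid("model.safetensors") == expected
```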