Chan-Y committed
Commit: ba90b19
Parent: b632e30

Update README.md

Files changed (1):
  1. README.md (+62, -58)
README.md CHANGED
@@ -1,58 +1,62 @@
- ---
- library_name: transformers
- tags:
- - generated_from_trainer
- model-index:
- - name: speecht5_finetuned_tr_commonvoice
-   results: []
- ---
-
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # speecht5_finetuned_tr_commonvoice
-
- This model was trained from scratch on the None dataset.
- It achieves the following results on the evaluation set:
- - eval_loss: 0.5179
- - eval_runtime: 361.0936
- - eval_samples_per_second: 32.161
- - eval_steps_per_second: 16.082
- - epoch: 1.6783
- - step: 2000
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 1e-05
- - train_batch_size: 4
- - eval_batch_size: 2
- - seed: 42
- - gradient_accumulation_steps: 8
- - total_train_batch_size: 32
- - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- - lr_scheduler_type: linear
- - lr_scheduler_warmup_steps: 500
- - training_steps: 4000
- - mixed_precision_training: Native AMP
-
- ### Framework versions
-
- - Transformers 4.46.3
- - Pytorch 2.5.1+cu124
- - Datasets 3.1.0
- - Tokenizers 0.20.3

+ ---
+ library_name: transformers
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: speecht5_finetuned_tr_commonvoice
+   results: []
+ language:
+ - tr
+ base_model:
+ - microsoft/speecht5_tts
+ ---
+
+ # speecht5_finetuned_tr_commonvoice
+
+ This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) for Turkish text-to-speech, trained on Common Voice data.
+ It achieves the following results on the evaluation set:
+ - eval_loss: 0.5179
+ - eval_runtime: 361.0936 seconds
+ - eval_samples_per_second: 32.161
+ - eval_steps_per_second: 16.082
+ - epoch: 1.6783
+ - step: 2000
+
+ ## Model description
+
+ A SpeechT5 text-to-speech model for Turkish, fine-tuned from [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts). Given Turkish text and a 512-dimensional speaker x-vector, it predicts a mel spectrogram that a vocoder such as microsoft/speecht5_hifigan can turn into a waveform.
+
+ ## Intended uses & limitations
+
+ Intended for Turkish speech synthesis. Like other SpeechT5 fine-tunes, it needs an external speaker embedding and a vocoder at inference time, and input text may need normalization (numbers and abbreviations spelled out) for best results. A minimal inference sketch follows.
+
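+ A minimal inference sketch, assuming the checkpoint is published as `Chan-Y/speecht5_finetuned_tr_commonvoice` (hypothetical repo id) and borrowing a speaker x-vector from `Matthijs/cmu-arctic-xvectors` as a placeholder embedding:
+
+ ```python
+ # Sketch only: the repo id and the speaker-embedding source are assumptions.
+ import torch
+ import soundfile as sf
+ from datasets import load_dataset
+ from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor
+
+ model_id = "Chan-Y/speecht5_finetuned_tr_commonvoice"  # hypothetical repo id
+ processor = SpeechT5Processor.from_pretrained(model_id)
+ model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
+ vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")
+
+ # Any 512-dim x-vector works as the speaker embedding; this one is a placeholder.
+ xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
+ speaker_embedding = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)
+
+ text = "Merhaba, bu bir deneme cümlesidir."  # "Hello, this is a test sentence."
+ inputs = processor(text=text, return_tensors="pt")
+ speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
+ sf.write("speech.wav", speech.numpy(), samplerate=16000)  # SpeechT5 outputs 16 kHz audio
+ ```
+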
+ ## Training and evaluation data
+
+ Fine-tuned on Turkish speech from Mozilla Common Voice, as the model name indicates; the exact Common Voice release and train/eval splits are not recorded in this card.
+
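+ A loading sketch under stated assumptions: the Common Voice release shown is a guess, and the dataset is gated on the Hub, so its terms must be accepted first:
+
+ ```python
+ # Illustrative only: the exact Common Voice release used for training is not recorded.
+ from datasets import load_dataset
+
+ cv_turkish = load_dataset(
+     "mozilla-foundation/common_voice_17_0",  # assumed release
+     "tr",
+     split="train",
+ )
+ print(cv_turkish[0]["sentence"])  # each row pairs an audio clip with its transcript
+ ```
+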
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (a sketch of the corresponding training arguments follows the list):
+ - learning_rate: 1e-05
+ - train_batch_size: 4
+ - eval_batch_size: 2
+ - seed: 42
+ - gradient_accumulation_steps: 8
+ - total_train_batch_size: 32 (4 per device × 8 accumulation steps)
+ - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 500
+ - training_steps: 4000
+ - mixed_precision_training: Native AMP
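+
+ A sketch of how these values map onto `Seq2SeqTrainingArguments`; output_dir, eval cadence, and save cadence are illustrative, not recorded from the original run:
+
+ ```python
+ # Hypothetical reconstruction from the hyperparameter list above.
+ from transformers import Seq2SeqTrainingArguments
+
+ training_args = Seq2SeqTrainingArguments(
+     output_dir="speecht5_finetuned_tr_commonvoice",  # illustrative
+     learning_rate=1e-5,
+     per_device_train_batch_size=4,
+     per_device_eval_batch_size=2,
+     gradient_accumulation_steps=8,  # effective train batch size: 4 * 8 = 32
+     seed=42,
+     lr_scheduler_type="linear",
+     warmup_steps=500,
+     max_steps=4000,
+     fp16=True,  # native AMP mixed precision
+     eval_strategy="steps",  # illustrative; evaluation cadence not recorded
+     save_steps=1000,  # illustrative
+     # optim defaults to adamw_torch with betas=(0.9, 0.999) and eps=1e-8,
+     # matching the optimizer reported above.
+ )
+ ```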
+
+ ### Framework versions
+
+ - Transformers 4.46.3
+ - PyTorch 2.5.1+cu124
+ - Datasets 3.1.0
+ - Tokenizers 0.20.3