catsOfpeople committed on
Commit
211a63a
1 Parent(s): 50c4ed1

End of training

Files changed (2)
  1. README.md +96 -0
  2. generation_config.json +9 -0
README.md ADDED
@@ -0,0 +1,96 @@
---
library_name: transformers
license: mit
base_model: catsOfpeople/speecht5_finetuned_emirhan_soomea
tags:
- generated_from_trainer
model-index:
- name: speecht5_soome-V2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# speecht5_soome-V2

This model is a fine-tuned version of [catsOfpeople/speecht5_finetuned_emirhan_soomea](https://huggingface.co/catsOfpeople/speecht5_finetuned_emirhan_soomea) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2695

## Model description

More information needed

## Intended uses & limitations

More information needed

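The card does not include usage instructions, so the following is only a minimal text-to-speech sketch. It assumes the checkpoint is published under the hypothetical repo id `catsOfpeople/speecht5_soome-V2`, that the standard `microsoft/speecht5_hifigan` vocoder is used, and that a suitable 512-dimensional speaker embedding is available; the zero vector below is just a placeholder.

```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

# Hypothetical repo id for this fine-tune; adjust to the actual checkpoint location.
model_id = "catsOfpeople/speecht5_soome-V2"

processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Replace with text in the (undocumented) language the model was fine-tuned on.
inputs = processor(text="Hello, this is a test.", return_tensors="pt")

# SpeechT5 conditions generation on a 512-dim x-vector speaker embedding.
# A zero vector is only a placeholder; use an embedding matching the
# speaker(s) seen during fine-tuning.
speaker_embeddings = torch.zeros((1, 512))

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("output.wav", speech.numpy(), samplerate=16000)  # SpeechT5 outputs 16 kHz audio
```
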
## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reconstruction sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 3500
- mixed_precision_training: Native AMP

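The list above maps onto a `Seq2SeqTrainingArguments` setup in `transformers`. The sketch below is a reconstruction, not the original training script: the output directory is a placeholder, and the evaluation cadence is inferred from the 100-step intervals in the results table.

```python
from transformers import Seq2SeqTrainingArguments

# Reconstruction of the hyperparameters listed above; output_dir and the
# evaluation cadence are assumptions, not taken from the original script.
# Adam with betas=(0.9, 0.999) and eps=1e-8 is the Trainer's default optimizer.
training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_soome-V2",
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,  # 16 x 8 = 128 effective (total) train batch size
    warmup_steps=100,
    max_steps=3500,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                      # "Native AMP" mixed precision
    eval_strategy="steps",
    eval_steps=100,                 # matches the 100-step intervals in the results table
)
```
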
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:--------:|:----:|:---------------:|
| 0.9648 | 4.5198 | 100 | 0.4308 |
| 0.4495 | 9.0395 | 200 | 0.3583 |
| 0.384 | 13.5593 | 300 | 0.3418 |
| 0.3637 | 18.0791 | 400 | 0.3177 |
| 0.3443 | 22.5989 | 500 | 0.3119 |
| 0.3366 | 27.1186 | 600 | 0.3099 |
| 0.3328 | 31.6384 | 700 | 0.3222 |
| 0.3238 | 36.1582 | 800 | 0.3091 |
| 0.3196 | 40.6780 | 900 | 0.2960 |
| 0.3156 | 45.1977 | 1000 | 0.2977 |
| 0.3123 | 49.7175 | 1100 | 0.2960 |
| 0.3107 | 54.2373 | 1200 | 0.2904 |
| 0.3029 | 58.7571 | 1300 | 0.2891 |
| 0.2978 | 63.2768 | 1400 | 0.2904 |
| 0.3012 | 67.7966 | 1500 | 0.2855 |
| 0.2977 | 72.3164 | 1600 | 0.2863 |
| 0.2915 | 76.8362 | 1700 | 0.2855 |
| 0.2935 | 81.3559 | 1800 | 0.2853 |
| 0.2877 | 85.8757 | 1900 | 0.2794 |
| 0.2839 | 90.3955 | 2000 | 0.2820 |
| 0.2847 | 94.9153 | 2100 | 0.2781 |
| 0.2831 | 99.4350 | 2200 | 0.2799 |
| 0.283 | 103.9548 | 2300 | 0.2811 |
| 0.2792 | 108.4746 | 2400 | 0.2774 |
| 0.2788 | 112.9944 | 2500 | 0.2813 |
| 0.2793 | 117.5141 | 2600 | 0.2755 |
| 0.2746 | 122.0339 | 2700 | 0.2769 |
| 0.2735 | 126.5537 | 2800 | 0.2729 |
| 0.2728 | 131.0734 | 2900 | 0.2764 |
| 0.2735 | 135.5932 | 3000 | 0.2751 |
| 0.2726 | 140.1130 | 3100 | 0.2754 |
| 0.2691 | 144.6328 | 3200 | 0.2707 |
| 0.2711 | 149.1525 | 3300 | 0.2717 |
| 0.2679 | 153.6723 | 3400 | 0.2724 |
| 0.2665 | 158.1921 | 3500 | 0.2695 |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
generation_config.json ADDED
@@ -0,0 +1,9 @@
{
  "_from_model_config": true,
  "bos_token_id": 0,
  "decoder_start_token_id": 2,
  "eos_token_id": 2,
  "max_length": 1876,
  "pad_token_id": 1,
  "transformers_version": "4.44.2"
}
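These generation settings ship with the model and can be inspected programmatically. A small sketch, assuming the same hypothetical repo id as above:

```python
from transformers import GenerationConfig

# Hypothetical repo id; adjust to the actual checkpoint location.
gen_config = GenerationConfig.from_pretrained("catsOfpeople/speecht5_soome-V2")

print(gen_config.max_length)              # 1876
print(gen_config.decoder_start_token_id)  # 2
```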