End of training
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 library_name: transformers
 license: mit
-base_model:
+base_model: fahadqazi/Sindhi-TTS
 tags:
 - generated_from_trainer
 model-index:
@@ -14,14 +14,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # Sindhi-TTS
 
-This model is a fine-tuned version of [
-It achieves the following results on the evaluation set:
-- eval_loss: 0.5724
-- eval_runtime: 28.952
-- eval_samples_per_second: 61.55
-- eval_steps_per_second: 1.313
-- epoch: 2.9940
-- step: 500
+This model is a fine-tuned version of [fahadqazi/Sindhi-TTS](https://huggingface.co/fahadqazi/Sindhi-TTS) on the None dataset.
 
 ## Model description
 
@@ -41,15 +34,15 @@
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0001
-- train_batch_size:
+- train_batch_size: 64
 - eval_batch_size: 48
 - seed: 42
-- gradient_accumulation_steps:
-- total_train_batch_size:
+- gradient_accumulation_steps: 4
+- total_train_batch_size: 256
 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 100
-- training_steps:
+- training_steps: 5000
 - mixed_precision_training: Native AMP
 
 ### Framework versions
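The hyperparameter list in the updated README maps directly onto a transformers TrainingArguments configuration; the reported total_train_batch_size of 256 is simply train_batch_size × gradient_accumulation_steps (64 × 4). The sketch below reconstructs that configuration under the assumption that the standard Trainer API was used; output_dir is an illustrative placeholder, and the authoritative values are the ones serialized in training_args.bin.

```python
# Hedged sketch: a TrainingArguments setup matching the hyperparameters listed
# in the updated README. output_dir is an assumption; the real arguments are
# stored in training_args.bin.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Sindhi-TTS",            # assumption: any local path works
    learning_rate=1e-4,                 # learning_rate: 0.0001
    per_device_train_batch_size=64,     # train_batch_size: 64
    per_device_eval_batch_size=48,      # eval_batch_size: 48
    gradient_accumulation_steps=4,      # effective batch size: 64 * 4 = 256
    max_steps=5000,                     # training_steps: 5000
    warmup_steps=100,                   # lr_scheduler_warmup_steps: 100
    lr_scheduler_type="linear",
    optim="adamw_torch",                # AdamW with betas=(0.9, 0.999), eps=1e-08
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
    fp16=True,                          # mixed_precision_training: Native AMP
)
```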
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:259dbe7371cadc0d46c31088e794dbbaa2618019632ee7bfe130ec7b9a8fd3ff
 size 617574792
runs/Nov14_23-40-43_6aa7ecbbcfb5/events.out.tfevents.1731627648.6aa7ecbbcfb5.1822.2 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:012df0f58f20a5f6d57e3e311f159535be3329d243cad10c324c1231a5a43c2c
+size 8343
runs/Nov14_23-45-14_6aa7ecbbcfb5/events.out.tfevents.1731627940.6aa7ecbbcfb5.1822.3 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b1dca5057dc22551593165b2aab2a0575d89cf09ecaea958a16786f2207686b5
+size 7032
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:ee4e069e5337785281cccfc39e3b386198e086922e70768c0c6e1bc5fc2c7014
 size 5432
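The binary files in this commit (model.safetensors, the TensorBoard event files under runs/, and training_args.bin) are stored through Git LFS, so the diff shows only their pointer files: the LFS spec version, the sha256 oid of the object, and its size in bytes. A downloaded artifact can be checked against its pointer by recomputing both values; the sketch below does this for training_args.bin and assumes a local checkout of the repository.

```python
# Minimal sketch: verify a downloaded file against its Git LFS pointer
# (sha256 oid and byte size) as shown in this commit. The path assumes the
# repository is checked out locally; adjust as needed.
import hashlib
from pathlib import Path

def verify_lfs_object(path: str, expected_oid: str, expected_size: int) -> bool:
    data = Path(path).read_bytes()
    ok_size = len(data) == expected_size
    ok_hash = hashlib.sha256(data).hexdigest() == expected_oid
    return ok_size and ok_hash

# Values taken from the training_args.bin pointer in this commit.
print(verify_lfs_object(
    "training_args.bin",
    "ee4e069e5337785281cccfc39e3b386198e086922e70768c0c6e1bc5fc2c7014",
    5432,
))
```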