Update README.md
README.md CHANGED
@@ -41,9 +41,6 @@ It achieves the following results on the evaluation set:
 
 More information needed
 
-## Intended uses & limitations
-
-More information needed
 
 ## Training and evaluation data
 
@@ -65,6 +62,35 @@ The following hyperparameters were used during training:
 - training_steps: 4000
 - mixed_precision_training: Native AMP
 
+```python
+from transformers import Seq2SeqTrainingArguments
+
+training_args = Seq2SeqTrainingArguments(
+    output_dir="./whisper-small-da",
+    per_device_train_batch_size=16,
+    gradient_accumulation_steps=1,
+    learning_rate=1e-5,
+    lr_scheduler_type="linear",
+    warmup_steps=50,
+    max_steps=4000,
+    gradient_checkpointing=True,
+    fp16=True,
+    fp16_full_eval=True,
+    evaluation_strategy="steps",
+    per_device_eval_batch_size=16,
+    predict_with_generate=True,
+    generation_max_length=225,
+    save_steps=500,
+    eval_steps=500,
+    logging_steps=25,
+    report_to=["tensorboard"],
+    load_best_model_at_end=True,
+    metric_for_best_model="wer",
+    greater_is_better=False,
+    push_to_hub=True,
+)
+```
+
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
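
For context, a sketch of how a `Seq2SeqTrainingArguments` object like the one added above is typically consumed, following the standard `transformers` `Seq2SeqTrainer` recipe for Whisper fine-tuning. The base checkpoint, the language, and the `train_dataset` / `eval_dataset` / `data_collator` / `compute_metrics` names are hypothetical stand-ins for the usual setup, not part of this change:

```python
from transformers import (
    Seq2SeqTrainer,
    WhisperForConditionalGeneration,
    WhisperProcessor,
)

# Hypothetical setup: "openai/whisper-small" and language="danish" are
# assumptions consistent with a whisper-small-da fine-tune; they are not
# taken from this diff.
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
processor = WhisperProcessor.from_pretrained(
    "openai/whisper-small", language="danish", task="transcribe"
)

trainer = Seq2SeqTrainer(
    args=training_args,               # the arguments added in this diff
    model=model,
    train_dataset=train_dataset,      # hypothetical: log-mel features + label ids
    eval_dataset=eval_dataset,        # hypothetical
    data_collator=data_collator,      # hypothetical: pads inputs and labels separately
    compute_metrics=compute_metrics,  # hypothetical: returns {"wer": ..., "wer_ortho": ...}
    tokenizer=processor,              # lets the trainer save/push the processor too
)
trainer.train()
```

Because the arguments set `metric_for_best_model="wer"` with `greater_is_better=False` and `load_best_model_at_end=True`, the checkpoint with the lowest reported WER is restored when training finishes.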
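
The results table distinguishes `Wer Ortho` (orthographic word error rate, scored on raw text) from `Wer` (scored after text normalization). A minimal, self-contained illustration of the difference; the Danish strings are made up, but `evaluate.load("wer")` and `BasicTextNormalizer` (the language-agnostic normalizer, despite its module path) are real APIs:

```python
import evaluate
from transformers.models.whisper.english_normalizer import BasicTextNormalizer

wer_metric = evaluate.load("wer")
normalizer = BasicTextNormalizer()  # lowercases and strips punctuation

pred = "Det er en test."
ref = "det var en test"

# Orthographic WER: casing and punctuation count as errors -> 3/4 = 0.75
wer_ortho = wer_metric.compute(predictions=[pred], references=[ref])

# Normalized WER: only the genuine substitution ("er" vs "var") remains -> 1/4 = 0.25
wer = wer_metric.compute(predictions=[normalizer(pred)], references=[normalizer(ref)])

print(f"wer_ortho={wer_ortho:.2f}, wer={wer:.2f}")
```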