pravin96 committed
Commit 8542546
1 Parent(s): 0ee02af

Model save

Files changed (1)
1. README.md +3 -24
README.md CHANGED
@@ -6,24 +6,9 @@ tags:
 - generated_from_trainer
 datasets:
 - generator
-metrics:
-- wer
 model-index:
 - name: distil_whisper_en
-  results:
-  - task:
-      name: Automatic Speech Recognition
-      type: automatic-speech-recognition
-    dataset:
-      name: generator
-      type: generator
-      config: default
-      split: train
-      args: default
-    metrics:
-    - name: Wer
-      type: wer
-      value: 0.8298755186721992
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
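
For reference, the `wer` metadata removed in the hunk above is word error rate, the standard ASR metric: a value of 0.8298755186721992 means roughly 83 word errors per 100 reference words. A minimal sketch of how such a score is typically computed, assuming the Hugging Face `evaluate` library (the evaluation code itself is not part of this commit):

```python
# Sketch of a WER computation with the `evaluate` library (assumed here;
# this commit does not include the actual evaluation code).
import evaluate

wer_metric = evaluate.load("wer")

# Hypothetical transcripts, for illustration only.
predictions = ["the cat sat on the mat"]
references = ["the cat sat on a mat"]

# WER = (substitutions + deletions + insertions) / reference word count.
print(wer_metric.compute(predictions=predictions, references=references))
# One substitution over six reference words -> 0.1666...
```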
@@ -32,9 +17,6 @@ should probably proofread and complete it, then remove this comment. -->
 # distil_whisper_en
 
 This model is a fine-tuned version of [distil-whisper/distil-small.en](https://huggingface.co/distil-whisper/distil-small.en) on the generator dataset.
-It achieves the following results on the evaluation set:
-- Loss: 0.0000
-- Wer: 0.8299
 
 ## Model description
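
The card names [distil-whisper/distil-small.en](https://huggingface.co/distil-whisper/distil-small.en) as the base checkpoint. A minimal inference sketch using the `transformers` ASR pipeline; the Hub id of the fine-tuned checkpoint is not shown in this diff, so the base model id stands in for it:

```python
# Minimal ASR inference sketch with the transformers pipeline.
# "distil-whisper/distil-small.en" is the base checkpoint named in the
# card; substitute the fine-tuned repo id (not shown in this commit).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="distil-whisper/distil-small.en",
)

# "sample.wav" is a hypothetical local audio file.
print(asr("sample.wav")["text"])
```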
40
 
@@ -61,15 +43,12 @@ The following hyperparameters were used during training:
 - total_train_batch_size: 4
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 500
-- training_steps: 500
+- lr_scheduler_warmup_steps: 100
+- training_steps: 100
 - mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch  | Step | Validation Loss | Wer    |
-|:-------------:|:------:|:----:|:---------------:|:------:|
-| 0.0           | 19.031 | 500  | 0.0000          | 0.8299 |
 
 
 ### Framework versions
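
One detail worth noting in the hunk above: the new values leave `lr_scheduler_warmup_steps` equal to `training_steps`, so the linear warmup occupies the entire run and the peak learning rate is never held. A sketch of how the listed hyperparameters map onto `Seq2SeqTrainingArguments`, assuming a standard `transformers` Trainer setup (the training script is not part of this commit):

```python
# Sketch: the hyperparameters above expressed as Seq2SeqTrainingArguments.
# Assumes a standard transformers Trainer run; the actual script is not
# included in this commit, and learning_rate is outside the visible diff.
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="distil_whisper_en",   # hypothetical output directory
    per_device_train_batch_size=4,    # total_train_batch_size: 4 on a single device
    lr_scheduler_type="linear",
    warmup_steps=100,                 # lr_scheduler_warmup_steps: 100
    max_steps=100,                    # training_steps: 100 (warmup spans the whole run)
    adam_beta1=0.9,                   # Adam betas/epsilon as listed (library defaults)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,                        # mixed_precision_training: Native AMP
)
```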
 