lnxdx committed
Commit: 8bed95e
Parent: f860d7c

Update README.md

Files changed (1): README.md (+18 −4)

README.md CHANGED
@@ -13,7 +13,7 @@ widget:
   src: >-
     https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian-v3/resolve/main/sample5168.flac
 model-index:
-- name: wav2vec2-large-xlsr-persian-asr-shemo_me7494
+- name: wav2vec2-large-xlsr-persian-shemo
   results:
   - task:
       name: Speech Recognition
@@ -86,7 +86,21 @@ More information needed
 
 ## Training procedure
 
-### Training hyperparameters
+##### Model hyperparameters
+```python
+model = Wav2Vec2ForCTC.from_pretrained(
+    model_name_or_path if not last_checkpoint else last_checkpoint,
+    # hp-mehrdad: Hyperparams of 'm3hrdadfi/wav2vec2-large-xlsr-persian-v3'
+    attention_dropout = 0.05316,
+    hidden_dropout = 0.01941,
+    feat_proj_dropout = 0.01249,
+    mask_time_prob = 0.04529,
+    layerdrop = 0.01377,
+    ctc_loss_reduction = 'mean',
+    ctc_zero_infinity = True,
+)
+```
+##### Training hyperparameters
 
 The following hyperparameters were used during training:
 - learning_rate: 1e-05
@@ -101,7 +115,7 @@ The following hyperparameters were used during training:
 - training_steps: 2000
 - mixed_precision_training: Native AMP
 
-### Training results
+##### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Wer |
 |:-------------:|:-----:|:----:|:---------------:|:------:|
@@ -127,7 +141,7 @@ The following hyperparameters were used during training:
 | 0.7618 | 12.5 | 2000 | 0.6728 | 0.3286 |
 
 
-### Framework versions
+##### Framework versions
 
 - Transformers 4.35.2
 - Pytorch 2.1.0+cu118
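
The `Wer` column in the training-results table above is the word error rate. The README does not say which library computed it (typically `jiwer` or `evaluate` in ASR training scripts), but as an illustrative sketch, WER is the word-level Levenshtein distance between reference and hypothesis, divided by the number of reference words:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution
            )
    return d[len(ref)][len(hyp)] / max(len(ref), 1)
```

So a final WER of 0.3286 means roughly one in three reference words was substituted, deleted, or inserted incorrectly in the decoded transcripts.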