Salama1429 committed on
Commit 1ab7485
1 Parent(s): 61b1321

Update README.md

Files changed (1):
  1. README.md +11 -7
README.md CHANGED
@@ -45,26 +45,30 @@ It achieves the following results on the evaluation set:
 - Loss: 0.5362
 - Wer: 58.5848
 
-## Model description
+## Example of usage:
+```
+from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq
 
-More information needed
+processor = AutoProcessor.from_pretrained("Salama1429/KalemaTech-Arabic-STT-ASR-based-on-Whisper-Small")
 
+model = AutoModelForSpeechSeq2Seq.from_pretrained("Salama1429/KalemaTech-Arabic-STT-ASR-based-on-Whisper-Small")
+```
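The snippet added above only loads the checkpoint. As a usage follow-up (not part of this commit), here is a minimal inference sketch; the file name `arabic_sample.wav` is a placeholder and `librosa` is assumed for audio loading:

```python
import librosa
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

model_id = "Salama1429/KalemaTech-Arabic-STT-ASR-based-on-Whisper-Small"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id)

# Whisper expects 16 kHz mono audio; librosa resamples on load.
# "arabic_sample.wav" is a placeholder file name.
speech, sr = librosa.load("arabic_sample.wav", sr=16000)

# Convert the waveform into log-Mel input features.
inputs = processor(speech, sampling_rate=sr, return_tensors="pt")

# Generate token IDs, then decode them to text.
predicted_ids = model.generate(inputs.input_features)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```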
 ## Intended uses & limitations
 
-More information needed
+Automatic Speech Recognition
 
 ## Training and evaluation data
-
+```
 Common_Voice_Arabic_12.0, to which I applied the following augmentations:
 - 25% of the data: TimeMasking
 - 25% of the data: SpecAugmentation
 - 25% of the data: WavAugmentation (AddGaussianNoise)
 - The final dataset is the original Common Voice plus the augmented files
+```
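The card names three augmentations but does not include their code. A rough reconstruction sketch, assuming `audiomentations` for the waveform noise and `torchaudio` for the masking transforms; every parameter value here is invented for illustration:

```python
import torch
import torchaudio.transforms as T
from audiomentations import AddGaussianNoise

# Waveform-level augmentation: additive Gaussian noise.
# Amplitude range and probability are assumptions, not from the card.
add_noise = AddGaussianNoise(min_amplitude=0.001, max_amplitude=0.015, p=1.0)

# Spectrogram-level augmentations: plain time masking, and a
# SpecAugment-style combination of frequency and time masking.
time_mask = T.TimeMasking(time_mask_param=30)
spec_augment = torch.nn.Sequential(
    T.FrequencyMasking(freq_mask_param=15),
    T.TimeMasking(time_mask_param=30),
)

def augment_waveform(samples, sample_rate=16000):
    """samples: 1-D float32 numpy array of a mono recording."""
    return add_noise(samples=samples, sample_rate=sample_rate)

def augment_spectrogram(spec):
    """spec: (channel, freq, time) tensor, e.g. log-Mel features."""
    return spec_augment(spec)
```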
 ## Training procedure
 
 ### Training hyperparameters
-
+```
 The following hyperparameters were used during training:
 - learning_rate: 1e-05
 - train_batch_size: 64
@@ -75,7 +79,7 @@ The following hyperparameters were used during training:
 - lr_scheduler_warmup_steps: 500
 - num_epochs: 25
 - mixed_precision_training: Native AMP
-
+```
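For readers who want to reproduce this setup, one plausible mapping of the listed values onto transformers' `Seq2SeqTrainingArguments` (a sketch, not the author's script; `output_dir` is a placeholder and unlisted settings are left at library defaults):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-arabic",  # placeholder path
    learning_rate=1e-5,                   # learning_rate: 1e-05
    per_device_train_batch_size=64,       # train_batch_size: 64
    warmup_steps=500,                     # lr_scheduler_warmup_steps: 500
    num_train_epochs=25,                  # num_epochs: 25
    fp16=True,                            # mixed_precision_training: Native AMP
)
```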
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Wer |