drmeeseeks committed on
Commit c367e16
1 Parent(s): 2b664dc

Update README.md

Files changed (1):
  1. README.md +49 -3
README.md CHANGED
@@ -37,18 +37,22 @@ It achieves the following results on the evaluation set:

 ## Model description

- More information needed
+ - The main Whisper Small Hugging Face page: [Hugging Face - Whisper Small](https://huggingface.co/openai/whisper-small)
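+
+ A minimal inference sketch using the `transformers` pipeline API (the audio path is a placeholder):
+
+ ```python
+ # Hypothetical example: transcribing Amharic audio with this fine-tuned checkpoint.
+ from transformers import pipeline
+
+ asr = pipeline(
+     "automatic-speech-recognition",
+     model="drmeeseeks/whisper-small-am_et",  # this repository
+ )
+
+ # "sample.wav" stands in for a path to a local audio file.
+ print(asr("sample.wav")["text"])
+ ```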

 ## Intended uses & limitations

- More information needed
+ - For experimentation and curiosity.
+ - Based on the paper [arXiv](https://arxiv.org/abs/2212.04356) and [Benchmarking OpenAI Whisper for non-English ASR - Dan Shafer](https://blog.deepgram.com/benchmarking-openai-whisper-for-non-english-asr/), there is a performance bias towards certain languages and curated datasets.
+ - From the Whisper paper, am_et is a low-resource language (Table E), with WER results ranging from 120 to 229 depending on model size. Whisper small WER=120.2, indicating more training time may improve the fine-tuning. An illustrative WER computation follows below.
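+
+ A minimal sketch of how a WER figure like those above can be computed with the Hugging Face `evaluate` library (the transcripts here are placeholders, not model output):
+
+ ```python
+ # Hypothetical example: computing word error rate (WER) as reported in this card.
+ # Requires: pip install evaluate jiwer
+ import evaluate
+
+ wer_metric = evaluate.load("wer")
+
+ # Placeholder Amharic transcripts; in practice, predictions come from the model.
+ predictions = ["ሰላም ለዓለም ነው"]
+ references = ["ሰላም ዓለም ነው"]
+
+ # compute() returns a fraction; the card reports WER scaled by 100.
+ wer = 100 * wer_metric.compute(predictions=predictions, references=references)
+ print(f"WER: {wer:.2f}")  # one substitution over three words -> 33.33
+ ```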

 ## Training and evaluation data

- More information needed
+ - This model was trained/evaluated on "test+validation" data from google/fleurs [google/fleurs - HuggingFace Datasets](https://huggingface.co/datasets/google/fleurs).
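+
+ A minimal sketch of loading that split with the `datasets` library (the column access shown follows the FLEURS schema; purely illustrative):
+
+ ```python
+ # Hypothetical example: loading the "test+validation" data named above.
+ from datasets import load_dataset
+
+ # "test+validation" concatenates the two FLEURS splits used for this model.
+ fleurs_am = load_dataset("google/fleurs", "am_et", split="test+validation")
+
+ print(fleurs_am)                      # dataset size and columns
+ print(fleurs_am[0]["transcription"])  # one Amharic transcript
+ ```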

 ## Training procedure

+ - The training was done on Lambda Cloud GPU A100/40GB instances, provided by the OpenAI community event [Whisper Fine Tuning Event - Dec 2022](https://github.com/huggingface/community-events/tree/main/whisper-fine-tuning-event#fine-tune-whisper). Training used [HuggingFace Community Events - Whisper - run_speech_recognition_seq2seq_streaming.py](https://github.com/huggingface/community-events/blob/main/whisper-fine-tuning-event/run_speech_recognition_seq2seq_streaming.py), with the included [whisper_python_am_et.ipynb](https://huggingface.co/drmeeseeks/whisper-small-am_et/blob/main/am_et_fine_tune_whisper_streaming_colab_RUNNING-evalerrir.ipynb) to set up the Lambda Cloud GPU/Colab environment. For Colab, you must reduce the train batch size to the amount recommended in [Whisper Fine Tuning Event - Dec 2022](https://github.com/huggingface/community-events/tree/main/whisper-fine-tuning-event#fine-tune-whisper), as the T4 GPUs have 16GB of memory. The notebook sets up the environment, logs into your Hugging Face account, and generates a bash script; the generated `run.sh` is then run from the terminal with `bash run.sh` to train, as described on the Whisper community events GitHub page. A sketch of the kind of settings it forwards to the training script appears below.
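+
+ A minimal sketch of such settings (`max_steps` and `eval_steps` match the run reported below; the other values are illustrative placeholders, not this model's actual configuration):
+
+ ```python
+ # Illustrative stand-in for flags the generated run.sh passes to
+ # run_speech_recognition_seq2seq_streaming.py; not the exact settings used here.
+ from transformers import Seq2SeqTrainingArguments
+
+ training_args = Seq2SeqTrainingArguments(
+     output_dir="./whisper-small-am_et",
+     max_steps=5000,                  # the table below reports a 5000-step run
+     per_device_train_batch_size=64,  # placeholder; reduce on 16GB Colab T4 GPUs
+     evaluation_strategy="steps",
+     eval_steps=1000,                 # eval rows below appear every 1000 steps
+     predict_with_generate=True,      # generate during eval so WER can be computed
+     report_to=["tensorboard"],
+ )
+ ```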
+
 ### Training hyperparameters

 The following hyperparameters were used during training:

@@ -72,6 +76,20 @@ The following hyperparameters were used during training:
 | 0.0 | 4000.0 | 4000 | 12.2633 | 103.3422 |
 | 0.0 | 5000.0 | 5000 | 12.2408 | 102.9412 |

+ ### Recommendations
+
+ Limit training duration for smaller datasets to ~2000-3000 steps to avoid overfitting. 5000 steps using [HuggingFace - Whisper Small](https://huggingface.co/openai/whisper-small) takes ~5 hrs on A100 GPUs (~1 hr/1000 steps). Training encountered `RuntimeError: The size of tensor a (504) must match the size of tensor b (448) at non-singleton dimension 1`, which is related to [Trainer RuntimeError](https://discuss.huggingface.co/t/trainer-runtimeerror-the-size-of-tensor-a-462-must-match-the-size-of-tensor-b-448-at-non-singleton-dimension-1/26010), as some language datasets have inputs with non-standard lengths. That link did not resolve the issue, which appears elsewhere too: [Training languagemodel – RuntimeError the expanded size of the tensor (100) must match the existing size (64) at non singleton dimension 1](https://hungsblog.de/en/technology/troubleshooting/training-languagemodel-runtimeerror-the-expanded-size-of-the-tensor-100-must-match-the-existing-size-64-at-non-singleton-dimension-1/). To circumvent this issue, the `run.sh` parameters were adjusted, as sketched after this paragraph. Then run `python run_eval_whisper_streaming.py --model_id="openai/whisper-small" --dataset="google/fleurs" --config="am_et" --batch_size=32 --max_eval_samples=64 --device=0 --language="am"` to find the WER score manually; otherwise, erroring out during evaluation prevents the trained model from uploading to Hugging Face. Based on the paper [arXiv](https://arxiv.org/abs/2212.04356) and [Benchmarking OpenAI Whisper for non-English ASR - Dan Shafer](https://blog.deepgram.com/benchmarking-openai-whisper-for-non-english-asr/), there is a performance bias towards certain languages and curated datasets. The OpenAI fine-tuning community event provided ample _free_ GPU time to help develop the model further and improve WER scores.
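+
+ One plausible form of that `run.sh` parameter adjustment, sketched with the `datasets` API (an assumption about the fix, not the exact change made):
+
+ ```python
+ # Hypothetical workaround for the 448-token mismatch above: drop examples whose
+ # tokenized transcripts exceed Whisper's decoder limit (max_target_positions=448).
+ MAX_LABEL_LENGTH = 448
+
+ def label_within_limit(labels):
+     return len(labels) < MAX_LABEL_LENGTH
+
+ # `vectorized_dataset` is a placeholder for the tokenized training set, which has
+ # a "labels" column produced by the Whisper tokenizer.
+ vectorized_dataset = vectorized_dataset.filter(
+     label_within_limit, input_columns=["labels"]
+ )
+ ```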
+
+ ### Environmental Impact
+
+ Carbon emissions were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). In total, roughly 100 hours of compute were used, primarily in US East/Asia Pacific (80%/20%), with AWS as the reference provider. Additional resources are available at [Our World in Data - CO2 Emissions](https://ourworldindata.org/co2-emissions).
+
+ - __Hardware Type__: AMD EPYC 7J13 64-Core Processor (30-core VM), 197GB RAM, with NVIDIA A100-SXM 40GB
+ - __Hours Used__: 100 hrs
+ - __Cloud Provider__: Lambda Cloud GPU
+ - __Compute Region__: US East/Asia Pacific
+ - __Carbon Emitted__: 12 kg (GPU) + 13 kg (CPU) = 25 kg (roughly the weight of 6.6 gallons of water)
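+
+ A back-of-the-envelope check of those figures (every power and grid-intensity number here is an assumption, not a measurement from this run):
+
+ ```python
+ # Hypothetical sanity check of the emissions estimate above.
+ # Assumed average draw: ~400 W (A100 GPU) and ~420 W (30-core CPU VM);
+ # assumed grid intensity: ~0.3 kgCO2eq/kWh for the US East/Asia Pacific mix.
+ hours = 100
+ gpu_kwh = 0.400 * hours  # ~40 kWh
+ cpu_kwh = 0.420 * hours  # ~42 kWh
+ intensity = 0.3          # kgCO2eq per kWh
+ gpu_kg = gpu_kwh * intensity  # ~12 kg
+ cpu_kg = cpu_kwh * intensity  # ~13 kg
+ print(f"GPU: {gpu_kg:.0f} kg, CPU: {cpu_kg:.0f} kg, total: {gpu_kg + cpu_kg:.0f} kg")
+ ```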
+

 ### Framework versions

@@ -79,3 +97,31 @@ The following hyperparameters were used during training:
 - Pytorch 1.13.1+cu117
 - Datasets 2.8.1.dev0
 - Tokenizers 0.13.2
+
+ ### Citation
+
+ - [Whisper - GitHub](https://github.com/openai/whisper)
+ - [Whisper - OpenAI - Blog](https://openai.com/blog/whisper/)
+ - [Model Card - HuggingFace Hub - GitHub](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md)
+
+ ```bibtex
+ @misc{https://doi.org/10.48550/arxiv.2212.04356,
+   doi = {10.48550/ARXIV.2212.04356},
+   url = {https://arxiv.org/abs/2212.04356},
+   author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
+   keywords = {Audio and Speech Processing (eess.AS), Computation and Language (cs.CL), Machine Learning (cs.LG), Sound (cs.SD), FOS: Electrical engineering, electronic engineering, information engineering, FOS: Computer and information sciences},
+   title = {Robust Speech Recognition via Large-Scale Weak Supervision},
+   publisher = {arXiv},
+   year = {2022},
+   copyright = {arXiv.org perpetual, non-exclusive license}
+ }
+
+ @article{owidco2andothergreenhousegasemissions,
+   author = {Hannah Ritchie and Max Roser and Pablo Rosado},
+   title = {CO₂ and Greenhouse Gas Emissions},
+   journal = {Our World in Data},
+   year = {2020},
+   note = {https://ourworldindata.org/co2-and-other-greenhouse-gas-emissions}
+ }
+ ```