versae committed
Commit a76fed8
1 Parent(s): c4e12ae

Update README.md

Files changed (1)
  1. README.md +234 -58
README.md CHANGED
@@ -1,84 +1,260 @@
  ---
  language:
  - 'no'
- license: apache-2.0
  tags:
  - audio
  - asr
  - automatic-speech-recognition
  - hf-asr-leaderboard
- model-index:
- - name: nb-whisper-small-publicbeta-25k
-   results: []
  ---
- <!-- This model card has been generated automatically according to the information Keras had access to. You should
- probably proofread and complete it, then remove this comment. -->
-
- # nb-whisper-small-publicbeta-25k
-
- This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the NbAiLab/ncc_speech2 dataset.
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 5e-05
- - lr_scheduler_type: linear
- - per_device_train_batch_size: 32
- - total_train_batch_size_per_node: 128
- - total_train_batch_size: 1024
- - total_optimization_steps: 25,000
- - starting_optimization_step: None
- - finishing_optimization_step: 25,000
- - num_train_dataset_workers: 32
- - num_hosts: 8
- - total_num_training_examples: 25,600,000
- - steps_per_epoch: 7313
- - num_beams: 5
- - weight_decay: 0.01
- - adam_beta1: 0.9
- - adam_beta2: 0.98
- - adam_epsilon: 1e-06
- - dropout: True
- - bpe_dropout_probability: 0.1
- - activation_dropout_probability: 0.1
-
- ### Training results
-
- | step | validation_fleurs_loss | train_loss | validation_fleurs_wer | validation_fleurs_cer | validation_fleurs_exact_wer | validation_fleurs_exact_cer | validation_stortinget_loss | validation_stortinget_wer | validation_stortinget_cer | validation_stortinget_exact_wer | validation_stortinget_exact_cer |
- |:-----:|:----------------------:|:----------:|:---------------------:|:---------------------:|:---------------------------:|:---------------------------:|:--------------------------:|:-------------------------:|:-------------------------:|:-------------------------------:|:-------------------------------:|
- | 0 | 1.2013 | 3.1115 | 218.8876 | 174.4279 | 388.7694 | 278.9901 | 1.4191 | 71.3727 | 46.4810 | 76.7531 | 49.0057 |
- | 1000 | 0.5627 | 1.1938 | 16.3593 | 6.2586 | 20.0717 | 7.2820 | 0.4640 | 20.7725 | 11.8840 | 24.4401 | 12.5992 |
- | 2000 | 0.3961 | 0.9944 | 11.7192 | 4.0146 | 15.4719 | 4.9384 | 0.3737 | 16.5674 | 10.1748 | 20.0976 | 10.8109 |
- | 3000 | 0.3696 | 0.9185 | 10.8269 | 4.1576 | 14.7551 | 5.1220 | 0.3426 | 14.9167 | 9.5103 | 18.3471 | 10.1061 |
- | 4000 | 0.3467 | 0.8298 | 9.7858 | 4.2513 | 13.6201 | 5.1558 | 0.3251 | 14.3438 | 9.2267 | 17.7666 | 9.8219 |
- | 5000 | 0.3266 | 0.8400 | 10.0833 | 4.2711 | 13.8889 | 5.2138 | 0.3110 | 13.9022 | 9.1039 | 17.2299 | 9.6697 |
- | 6000 | 0.3280 | 0.7875 | 8.7745 | 3.3636 | 12.6344 | 4.3295 | 0.3058 | 13.5598 | 8.8853 | 16.9561 | 9.4543 |
- | 7000 | 0.3177 | 0.7937 | 8.5961 | 3.7581 | 12.7539 | 4.6775 | 0.2991 | 13.1425 | 8.6226 | 16.4905 | 9.1878 |
- | 8000 | 0.3383 | 0.7872 | 8.8935 | 3.8666 | 12.9630 | 4.7934 | 0.2917 | 13.0831 | 8.6552 | 16.4486 | 9.2255 |
- | 9000 | 0.3320 | 0.7526 | 9.1612 | 4.0738 | 13.0526 | 5.0495 | 0.2899 | 12.8380 | 8.4996 | 16.1350 | 9.0495 |
- | 10000 | 0.3267 | 0.7547 | 9.5181 | 4.1280 | 13.3513 | 5.1462 | 0.2894 | 12.7106 | 8.4593 | 16.0502 | 9.0189 |
- | 11000 | 0.3358 | 0.7120 | 9.0125 | 4.1379 | 13.4409 | 5.1703 | 0.2889 | 12.8828 | 8.5885 | 16.1915 | 9.1459 |
- | 12000 | 0.3179 | 0.7387 | 9.1910 | 4.2563 | 13.5006 | 5.2331 | 0.2825 | 12.6795 | 8.4383 | 16.0152 | 8.9950 |
- | 13000 | 0.3152 | 0.7295 | 8.7448 | 4.0541 | 12.7539 | 4.9529 | 0.2832 | 12.5267 | 8.4567 | 15.8700 | 9.0105 |
-
- ### Framework versions
-
- - Transformers 4.31.0.dev0
- - Datasets 2.13.0
- - Tokenizers 0.13.3
 
  ---
+ license: cc-by-4.0
  language:
  - 'no'
+ - nb
+ - nn
+ - en
+ datasets:
+ - NbAiLab/ncc_speech
+ - NbAiLab/NST
+ - NbAiLab/NPSC
  tags:
  - audio
  - asr
  - automatic-speech-recognition
  - hf-asr-leaderboard
+ metrics:
+ - wer
+ - cer
+ library_name: transformers
+ pipeline_tag: automatic-speech-recognition
  ---
+ # NB-Whisper small (beta)
+
+ This is a **_public beta_** of the Norwegian NB-Whisper. NB-Whisper is a series of models for automatic speech recognition (ASR) and speech translation, building upon the foundation laid by [OpenAI's Whisper](https://arxiv.org/abs/2212.04356). All models are trained on 20,000 hours of labeled data.
+
+ <center>
+ <figure>
+ <video controls>
+ <source src="https://huggingface.co/NbAiLab/nb-whisper-small-beta/resolve/main/king.mp4" type="video/mp4">
+ Your browser does not support the video tag.
+ </video>
+ <figcaption><a href="https://www.royalcourt.no/tale.html?tid=137662&sek=28409&scope=27248" target="_blank">Speech given by His Majesty The King of Norway at the garden party hosted by Their Majesties The King and Queen at the Palace Park on 1 September 2016.</a></figcaption>
+ </figure>
+ </center>
+ ## Model Details
+
+ NB-Whisper models are available in five different sizes (the table links to the other sizes, whose model cards are nearly identical):
+
+ | Model Size | Parameters | Availability |
+ |------------|------------|--------------|
+ | tiny | 39M | _Will be released in public beta later this summer_ |
+ | base | 74M | _Will be released in public beta later this summer_ |
+ | small | 244M | This model, available in public beta |
+ | medium | 769M | _Will be released in public beta later this summer_ |
+ | large | 1550M | _Will be released in public beta later this summer_ |
+
+ An official release of the NB-Whisper models is planned for fall 2023.
+
+ Please refer to the [OpenAI Whisper model card](https://huggingface.co/openai/whisper-small) for more details about the backbone model.
+ ### Model Description
+
+ - **Developed by:** [NB AI-Lab](https://ai.nb.no/)
+ - **Shared by:** [NB AI-Lab](https://ai.nb.no/)
+ - **Model type:** `whisper`
+ - **Language(s) (NLP):** Norwegian, Norwegian Bokmål, Norwegian Nynorsk, English
+ - **License:** [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/)
+ - **Finetuned from model:** [openai/whisper-small](https://huggingface.co/openai/whisper-small)
+
+ ### Model Sources
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** https://github.com/NbAiLab/nb-whisper/
+ - **Paper:** _Coming soon_
+ - **Demo:** _Coming soon_
+ ## Uses
+
+ ### Direct Use
+
+ This is a **_public beta_** release. The models published in this repository are intended for a generalist purpose and are available to third parties.
+
+ ### Downstream Use
+
+ We are confident that NB-Whisper will give better results than the multilingual OpenAI Whisper when the target language is Norwegian. However, the model is still known to hallucinate occasionally and to drop parts of the transcript from time to time. Please also note that the transcripts are typically not word for word: spoken and written language are often very different, and the model aims to "translate" spoken utterances into grammatically correct written sentences. We strongly believe that the best way to understand these models is to try them yourself, for example with the comparison sketch below.
+
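+ This is a minimal sketch for such a side-by-side comparison, not an official benchmark; `audio.mp3` stands in for your own recording.
+
+ ```python
+ from transformers import pipeline
+
+ # Run the multilingual baseline and NB-Whisper on the same clip
+ # and print both transcripts for a quick qualitative comparison.
+ for model_id in ["openai/whisper-small", "NbAiLab/nb-whisper-small-beta"]:
+     asr = pipeline("automatic-speech-recognition", model_id)
+     result = asr("audio.mp3", generate_kwargs={"task": "transcribe", "language": "no"})
+     print(f"{model_id}: {result['text']}")
+ ```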
+ A significant part of the training material comes from TV subtitles. Subtitles often shorten sentences to make them more readable, and non-essential parts of the utterance are typically dropped. In some cases this is a desired ability; in other cases it is undesired. The final release of these models will provide a mechanism to control this behaviour.
+
+ ### Out-of-Scope Use
+
+ Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
+
+ ## Bias, Risks, and Limitations
+
+ These models may exhibit bias and/or other undesirable distortions. When third parties deploy or provide systems and/or services based on any of these models (or become users of the models themselves), they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (The National Library of Norway) be liable for any results arising from the use made by third parties of these models.
+
+ ### Recommendations
+
+ We recommend that users of the NB-Whisper models consider finetuning them for their specific tasks of interest, and that guardrails and appropriate precautions be taken for any production use.
+
+ Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ ```python
+ from transformers import pipeline
+
+ # Load the ASR pipeline with the NB-Whisper small beta checkpoint
+ asr = pipeline(
+     "automatic-speech-recognition",
+     "NbAiLab/nb-whisper-small-beta"
+ )
+ # Transcribe a local audio file into Norwegian text
+ asr(
+     "audio.mp3",
+     generate_kwargs={'task': 'transcribe', 'language': 'no'}
+ )
+ # {'text': ' Så mange anga kører seg i så viktig sak, så vi får du kører det tilbake med. Om kabaret gudam i at vi skal hjælge. Kør seg vi gjør en uda? Nei noe skal å abelistera sonvorne skrifer. Det er sak, så kjent det bare handling i samtatsen til bargører. Trudet første lask. På den å først så å køre og en gange samme, og så får vi gjør å vorte vorte vorte når vi kjent dit.'}
+ ```
+
+ Timestamps can also be retrieved by passing `return_timestamps=True`.
+
+ ```python
+ # Transcribe again, this time returning per-chunk timestamps
+ asr(
+     "audio.mp3",
+     generate_kwargs={'task': 'transcribe', 'language': 'no'},
+     return_timestamps=True,
+ )
+ # {'text': ' at så mange angar til seg så viktig sak, så vi får jo kjølget klare tilbakemeldingen om hva valget dem gjør at vi skal gjøre. Hva skjer vi gjøre nå da? Nei, nå skal jo administrationen vår skrivferdige sak, så kjem til behandling i samfærdshetshøyvalget, tror det første
+ # r. Først så kan vi ta og henge dem kjemme, og så får vi gjøre vårt valget når vi kommer dit.',
+ #  'chunks': [{'timestamp': (0.0, 5.34),
+ #    'text': ' at så mange angar til seg så viktig sak, så vi får jo kjølget klare tilbakemeldingen om'},
+ #   {'timestamp': (5.34, 8.64),
+ #    'text': ' hva valget dem gjør at vi skal gjøre.'},
+ #   {'timestamp': (8.64, 10.64), 'text': ' Hva skjer vi gjøre nå da?'},
+ #   {'timestamp': (10.64, 17.44),
+ #    'text': ' Nei, nå skal jo administrationen vår skrivferdige sak, så kjem til behandling i samfærdshetshøyvalget,'},
+ #   {'timestamp': (17.44, 19.44), 'text': ' tror det første år.'},
+ #   {'timestamp': (19.44, 23.94),
+ #    'text': ' Først så kan vi ta og henge dem kjemme, og så får vi gjøre vårt valget når vi kommer dit.'}]}
+ ```
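+
+ Whisper models process audio in 30-second windows. For longer recordings, the pipeline can chunk the input itself; this is a minimal sketch using the pipeline's `chunk_length_s` option, with `long_audio.mp3` as a placeholder file name.
+
+ ```python
+ # Chunked long-form transcription: 30-second chunks, processed in batches
+ asr = pipeline(
+     "automatic-speech-recognition",
+     "NbAiLab/nb-whisper-small-beta",
+     chunk_length_s=30,
+     batch_size=8,
+ )
+ asr("long_audio.mp3", generate_kwargs={'task': 'transcribe', 'language': 'no'})
+ ```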
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ The training data comes from Språkbanken and the digital collection at the National Library of Norway (NLN); a loading sketch follows the list. It includes:
+
+ - [NST Norwegian ASR Database (16 kHz)](https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-54/), and its corresponding [dataset](https://huggingface.co/datasets/NbAiLab/NST)
+ - Transcribed speeches from the Norwegian Parliament produced by Språkbanken
+ - TV broadcast (NRK) subtitles (NLN digital collection)
+ - Audiobooks (NLN digital collection)
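+
+ The openly released parts can be inspected directly on the Hub. A minimal sketch, assuming the `NbAiLab/NST` dataset loads with its default configuration and supports streaming:
+
+ ```python
+ from datasets import load_dataset
+
+ # Stream a few examples from the NST portion without a full download;
+ # field names and configurations may differ between dataset versions.
+ nst = load_dataset("NbAiLab/NST", split="train", streaming=True)
+ for example in nst.take(3):
+     print(sorted(example.keys()))
+ ```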
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+ #### Training Hyperparameters
+
+ - **Training regime:** bf16 mixed precision (see the loading sketch below)
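+
+ The released weights can likewise be loaded in bf16 for inference. A minimal sketch, assuming bf16-capable hardware:
+
+ ```python
+ import torch
+ from transformers import WhisperForConditionalGeneration
+
+ # Load the checkpoint in bfloat16, matching the mixed-precision training regime
+ model = WhisperForConditionalGeneration.from_pretrained(
+     "NbAiLab/nb-whisper-small-beta",
+     torch_dtype=torch.bfloat16,
+ )
+ ```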
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Data Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
212
+ ## Environmental Impact
213
+
214
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
215
+
216
+ Carbon emissions estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
217
+
218
+ - **Hardware Type:** TPUv4
219
+ - **Hours used:** 1,536
220
+ - **Cloud Provider:** Google Cloud
221
+ - **Compute Region:** `us-central1`
222
+ - **Carbon Emitted:** Total emissions are estimated to be 247.77 kgCO₂ of which 100 percents were directly offset by the cloud provider.
223
+
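+
+ For reference, the figures above imply roughly 0.16 kgCO₂ per TPU hour:
+
+ ```python
+ # Back-of-the-envelope check of the reported estimate
+ hours = 1536
+ total_kg_co2 = 247.77
+ print(f"{total_kg_co2 / hours:.3f} kgCO2 per TPU hour")  # ~0.161
+ ```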
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation
+
+ _A paper is coming soon!_
+
+ <!-- **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed] -->
+
+ ## Acknowledgements
+
+ Thanks to [Google TPU Research Cloud](https://sites.research.google/trc/about/) for supporting this project with extensive training resources. Thanks to Google Cloud for supporting us with credits for translating large parts of the corpus. A special thanks to [Sanchit Gandhi](https://huggingface.co/sanchit-gandhi) for providing thorough technical advice in debugging and in the work of getting this to train on Google TPUs.
+
+ ## Contact
+
+ <a rel="noopener nofollow" href="mailto:ailab@nb.no">ailab@nb.no</a>