pere committed on
Commit 6968170
1 Parent(s): 6ebda13

Update description.md

Files changed (1)
  1. description.md +5 -3
description.md CHANGED
@@ -1,5 +1,5 @@
- # Norwegian Wav2Vec Models
- This is one of several Wav2Vec models created under the HuggingFace-hosted [Robust Speech Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614?s=09). In parallel with the event, the team also converted the [Norwegian Parliamentary Speech Corpus (NPSC)](https://huggingface.co/datasets/NbAiLab/NPSC) to the HuggingFace Dataset format and used it as the main source for training.
+ # Norwegian Wav2Vec2 Models
+ This is one of several Wav2Vec models created during the HuggingFace-hosted [Robust Speech Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614?s=09). In parallel with the event, the team also converted the [Norwegian Parliamentary Speech Corpus (NPSC)](https://huggingface.co/datasets/NbAiLab/NPSC) to the HuggingFace Dataset format and used it as the main source for training.
  
  We release all the code developed during the event so that the Norwegian NLP community can build on it to develop even better Norwegian ASR models. Finetuning these models is not very compute demanding; after following the instructions here, you should be able to train your own automatic speech recognition system in less than a day on an average GPU.
  
@@ -9,6 +9,8 @@ We strongly recommend that you follow the [instructions from HuggingFace](https:
  When you have verified that you are able to do this, create a new repo. You can then start by copying the files **run.sh** and **run_speech_recognition_ctc.py** from our repo. You should be able to reproduce our results by just running this script. With some tweaking, you will most likely be able to build an even better ASR model.
  
  ### Adding a language model
- HuggingFace has provided another [very nice blog post](https://huggingface.co/blog/wav2vec2-with-ngram) about how to add a 5-gram language model to improve the ASR model. You can build this from your own corpus, for instance by extracting some suitable text from the [Norwegian Colossal Corpus](https://huggingface.co/datasets/NbAiLab/NCC). You can also skip some of the steps in the guide and copy the 5-gram model
+ HuggingFace has provided another [very nice blog post](https://huggingface.co/blog/wav2vec2-with-ngram) about how to add a 5-gram language model to improve the ASR model. You can build this from your own corpus, for instance by extracting some suitable text from the [Norwegian Colossal Corpus](https://huggingface.co/datasets/NbAiLab/NCC). You can also skip some of the steps in the guide and copy the [5-gram model from this repo](https://huggingface.co/NbAiLab/XLSR-300M-bokmaal/tree/main/language_model).
+
+
 
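As a quick illustration of the NPSC conversion mentioned in the description above, the sketch below loads the dataset with the `datasets` library. The config name and column names are assumptions here, so check the [dataset card](https://huggingface.co/datasets/NbAiLab/NPSC) for the exact values.

```python
# Minimal sketch: load the NPSC dataset from the Hugging Face Hub.
# NOTE: the config name "16K_mp3_bokmaal" and the column names below are
# assumptions; verify them on the NbAiLab/NPSC dataset card.
from datasets import load_dataset

npsc = load_dataset("NbAiLab/NPSC", "16K_mp3_bokmaal", split="train")

sample = npsc[0]
print(sample["text"])                    # transcription (assumed column name)
print(sample["audio"]["sampling_rate"])  # decoded audio (assumed column name)
```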
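The reproduction step in the description boils down to running **run.sh**, which wraps a call to **run_speech_recognition_ctc.py**; the authoritative arguments are the ones checked into that script. Purely as a hedged sketch, an invocation of the upstream HuggingFace CTC fine-tuning script looks roughly like this (the base model, config name and hyperparameters are illustrative placeholders, not the values used for these models):

```python
# Sketch of what run.sh roughly does: call the CTC fine-tuning script with a
# set of command-line arguments. All values are illustrative placeholders;
# see run.sh in the repo for the real ones.
import subprocess

subprocess.run(
    [
        "python", "run_speech_recognition_ctc.py",
        "--model_name_or_path=facebook/wav2vec2-xls-r-300m",  # assumed base model
        "--dataset_name=NbAiLab/NPSC",
        "--dataset_config_name=16K_mp3_bokmaal",               # assumed config
        "--text_column_name=text",                             # assumed column
        "--output_dir=./wav2vec2-npsc-demo",
        "--num_train_epochs=15",                               # placeholder
        "--per_device_train_batch_size=16",                    # placeholder
        "--learning_rate=3e-4",                                # placeholder
        "--fp16",
        "--do_train",
        "--do_eval",
    ],
    check=True,
)
```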
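For the language-model step, the linked blog post walks through building a KenLM 5-gram and wrapping it around the CTC tokenizer with `pyctcdecode`. The sketch below shows only the final wiring; it assumes you already have a KenLM file locally (here called `language_model/5gram.bin`, whether built from your own corpus or copied from the linked repo).

```python
# Sketch: wrap an existing KenLM 5-gram around a fine-tuned Wav2Vec2 model,
# following the approach in the wav2vec2-with-ngram blog post.
# The local path "language_model/5gram.bin" is an assumption.
from pyctcdecode import build_ctcdecoder
from transformers import Wav2Vec2Processor, Wav2Vec2ProcessorWithLM

processor = Wav2Vec2Processor.from_pretrained("NbAiLab/XLSR-300M-bokmaal")

# Sort the tokenizer vocabulary by token id so the decoder labels line up
# with the columns of the CTC logits.
vocab = processor.tokenizer.get_vocab()
labels = [token for token, _ in sorted(vocab.items(), key=lambda kv: kv[1])]

decoder = build_ctcdecoder(labels, kenlm_model_path="language_model/5gram.bin")

processor_with_lm = Wav2Vec2ProcessorWithLM(
    feature_extractor=processor.feature_extractor,
    tokenizer=processor.tokenizer,
    decoder=decoder,
)
processor_with_lm.save_pretrained("wav2vec2-npsc-with-lm")
```

Decoding then goes through `processor_with_lm.batch_decode(logits)` instead of a plain argmax over the CTC output, as described in the blog post.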