---
language: da
tags:
- speech
license: apache-2.0
---

# Wav2vec2-base for Danish

This wav2vec2-base model has been pretrained on ~1300 hours of Danish speech data. The pretraining data consists of podcasts and audiobooks and is unfortunately not publicly available. However, we are allowed to distribute the pretrained model.

The pre-training was done using the fairseq library in January 2021.

It needs to be fine-tuned in order to perform speech recognition.
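As a starting point for fine-tuning, the checkpoint can be loaded into a CTC head with the 🤗 Transformers library. The sketch below builds the wav2vec2-base architecture from a config so it runs without downloading weights; in practice you would call `from_pretrained` with this model's repo id (shown as a placeholder comment, since the exact id is not stated here), and the vocabulary size would come from your Danish character tokenizer.

```python
# Minimal sketch, assuming the transformers library; the repo id below is a
# placeholder, not the actual model id.
from transformers import Wav2Vec2Config, Wav2Vec2ForCTC

# In practice, load the pretrained Danish weights and attach a fresh CTC head:
# model = Wav2Vec2ForCTC.from_pretrained("<this-repo-id>", vocab_size=32)

# Here we construct the same wav2vec2-base architecture from a config
# to illustrate the setup without a network download.
config = Wav2Vec2Config(vocab_size=32)  # vocab size of your Danish CTC labels
model = Wav2Vec2ForCTC(config)

# During fine-tuning it is common to freeze the convolutional feature encoder
# so only the transformer layers and CTC head are updated.
model.freeze_feature_encoder()

print(model.config.vocab_size)
```

The frozen feature encoder and the CTC head on top are the standard wav2vec 2.0 fine-tuning recipe; only the output vocabulary needs to match your transcription alphabet.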