---
language: de
datasets:
- common_voice
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
license: apache-2.0
model-index:
- name: wav2vec2-xls-r-1b-5gram-german with LM by Florian Zimmermeister
  results:
  - task: 
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice de
      type: common_voice
      args: de
    metrics:
       - name: Test WER
         type: wer
         value: 4.382541642219636
       - name: Test CER
         type: cer
         value: 1.6235493024026488
---
## Test Result

| Model | WER | CER |
| ------------- | ------------- | ------------- |
| flozi00/wav2vec2-xls-r-1b-5gram-german | **4.382541642219636%** | **1.6235493024026488%** |

## Evaluation
The model can be evaluated as follows on the German test split of Common Voice 8.0.

```python
import torch
from transformers import AutoModelForCTC, AutoProcessor
from unidecode import unidecode
import re
from datasets import load_dataset, load_metric
import datasets

# Running totals for averaging the per-sample metrics
counter = 0
wer_counter = 0
cer_counter = 0
device = "cuda" if torch.cuda.is_available() else "cpu"


special_chars = [["Ä"," AE "], ["Ö"," OE "], ["Ü"," UE "], ["ä"," ae "], ["ö"," oe "], ["ü"," ue "]]
def clean_text(sentence):
    for special in special_chars:
        sentence = sentence.replace(special[0], special[1])

    sentence = unidecode(sentence)

    for special in special_chars:
        sentence = sentence.replace(special[1], special[0])

    sentence = re.sub("[^a-zA-Z0-9öäüÖÄÜ ,.!?]", " ", sentence)

    return sentence

def main(model_id):
    print("load model")
    model = AutoModelForCTC.from_pretrained(model_id).to(device)
    print("load processor")
    # The processor ships with the model repo (feature extractor, tokenizer and LM decoder)
    processor = AutoProcessor.from_pretrained(model_id)

    print("load metrics")
    wer = load_metric("wer")
    cer = load_metric("cer")

    ds = load_dataset("mozilla-foundation/common_voice_8_0", "de")
    ds = ds["test"]

    # Decode the audio at 16 kHz, the sampling rate the model was trained on
    ds = ds.cast_column(
        "audio", datasets.features.Audio(sampling_rate=16_000)
    )

    def calculate_metrics(batch):
        global counter, wer_counter, cer_counter
        resampled_audio = batch["audio"]["array"]

        input_values = processor(resampled_audio, return_tensors="pt", sampling_rate=16_000).input_values

        with torch.no_grad():
            logits = model(input_values.to(device)).logits.cpu().numpy()[0]


        # Beam-search decoding with the 5-gram LM bundled in the processor
        decoded = processor.decode(logits)
        pred = decoded.text.lower()

        ref = clean_text(batch["sentence"]).lower()

        wer_result = wer.compute(predictions=[pred], references=[ref])
        cer_result = cer.compute(predictions=[pred], references=[ref])

        counter += 1
        wer_counter += wer_result
        cer_counter += cer_result

        if counter % 100 == 0:
            print(f"WER: {(wer_counter/counter)*100} | CER: {(cer_counter/counter)*100}")

        return batch


    ds.map(calculate_metrics, remove_columns=ds.column_names)
    print(f"WER: {(wer_counter/counter)*100} | CER: {(cer_counter/counter)*100}")

model_id = "flozi00/wav2vec2-xls-r-1b-5gram-german"
main(model_id)
```
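For quick single-utterance transcription, a minimal sketch is shown below. The audio path `path/to/audio.wav` is a placeholder, and loading/resampling with `torchaudio` is an assumption of this sketch, not part of the evaluation script above.

```python
import torch
import torchaudio
from transformers import AutoModelForCTC, AutoProcessor

model_id = "flozi00/wav2vec2-xls-r-1b-5gram-german"
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

# Load an audio file and resample it to the 16 kHz the model expects
# ("path/to/audio.wav" is a placeholder)
waveform, sample_rate = torchaudio.load("path/to/audio.wav")
waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

# Use the first channel only
inputs = processor(waveform[0].numpy(), sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits.cpu().numpy()[0]

# decode() runs beam search with the bundled 5-gram LM and returns the transcription
print(processor.decode(logits).text)
```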