---
language:
- it
library_name: nemo
datasets:
- facebook/voxpopuli
- facebook/multilingual_librispeech
- mozilla-foundation/common_voice_12_0
thumbnail: null
tags:
  - automatic-speech-recognition
  - speech
  - audio
  - Transducer
  - FastConformer
  - CTC
  - Transformer
  - pytorch
  - NeMo
  - hf-asr-leaderboard
license: cc-by-4.0
model-index:
- name: stt_it_fastconformer_hybrid_large_pc
  results:
  - task:
      type: Automatic Speech Recognition
      name: speech-recognition
    dataset:
      name: common-voice-12-0
      type: mozilla-foundation/common_voice_12_0
      config: it
      split: test
      args:
        language: it
    metrics:
    - name: Test WER
      type: wer
      value: 5.64
  - task:
      type: Automatic Speech Recognition
      name: automatic-speech-recognition
    dataset:
      name: Multilingual LibriSpeech
      type: facebook/multilingual_librispeech
      config: italian
      split: test
      args:
        language: it
    metrics:
    - name: Test WER
      type: wer
      value: 11.39
  - task:
      type: Automatic Speech Recognition
      name: speech-recognition
    dataset:
      name: VoxPopuli
      type: facebook/voxpopuli
      config: it
      split: test
      args:
        language: it
    metrics:
    - name: Test WER
      type: wer
      value: 16.22
  - task:
      type: Automatic Speech Recognition
      name: speech-recognition
    dataset:
      name: common-voice-12-0
      type: mozilla-foundation/common_voice_12_0
      config: it
      split: test
      args:
        language: it
    metrics:
    - name: Test WER P&C
      type: wer
      value: 8.11
  - task:
      type: Automatic Speech Recognition
      name: automatic-speech-recognition
    dataset:
      name: Multilingual LibriSpeech
      type: facebook/multilingual_librispeech
      config: italian
      split: test
      args:
        language: it
    metrics:
    - name: Test WER P&C
      type: wer
      value: 18.27
  - task:
      type: Automatic Speech Recognition
      name: speech-recognition
    dataset:
      name: VoxPopuli
      type: facebook/voxpopuli
      config: it
      split: test
      args:
        language: it
    metrics:
    - name: Test WER P&C
      type: wer
      value: 19.97
---
# NVIDIA FastConformer-Hybrid Large (it)

<style>
img {
 display: inline;
}
</style>

| [![Model architecture](https://img.shields.io/badge/Model_Arch-FastConformer--Transducer_CTC-lightgrey#model-badge)](#model-architecture)
| [![Model size](https://img.shields.io/badge/Params-115M-lightgrey#model-badge)](#model-architecture)
| [![Language](https://img.shields.io/badge/Language-it-lightgrey#model-badge)](#datasets)


This model transcribes speech in the Italian alphabet (upper- and lower-case letters) along with spaces, periods, commas, and question marks.
It is a "large" version of the FastConformer Transducer-CTC model (around 115M parameters). This is a hybrid model trained on two losses: Transducer (default) and CTC.
See the [model architecture](#model-architecture) section and [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#fast-conformer) for complete architecture details.

## NVIDIA NeMo: Training

To train, fine-tune, or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.
```bash
pip install "nemo_toolkit[all]"
```

## How to Use this Model

The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.

### Automatically instantiate the model

```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.EncDecHybridRNNTCTCBPEModel.from_pretrained(model_name="nvidia/stt_it_fastconformer_hybrid_large_pc")
```

### Transcribing using Python
First, let's get a sample:
```bash
wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav
```
Then simply do:
```python
asr_model.transcribe(['2086-149220-0033.wav'])
```

### Transcribing many audio files

Using Transducer mode inference:
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
 pretrained_name="nvidia/stt_it_fastconformer_hybrid_large_pc" \
 audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```

Using CTC mode inference:
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
 pretrained_name="nvidia/stt_it_fastconformer_hybrid_large_pc" \
 audio_dir="<DIRECTORY CONTAINING AUDIO FILES>" \
 decoder_type="ctc"
```
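
If you prefer to stay in Python, the same batch transcription can be done with the model's `transcribe` method. A minimal sketch, assuming a placeholder directory of 16 kHz mono wav files; the batch size is illustrative.

```python
from glob import glob

import nemo.collections.asr as nemo_asr

asr_model = nemo_asr.models.EncDecHybridRNNTCTCBPEModel.from_pretrained(
    model_name="nvidia/stt_it_fastconformer_hybrid_large_pc"
)

# Collect all wav files from a directory (placeholder path).
audio_files = sorted(glob("/path/to/audio/*.wav"))

# Larger batch sizes are faster but use more GPU memory.
# Depending on the NeMo version, `transcribe` returns a list of strings or
# a tuple of (best hypotheses, all hypotheses); print the raw result to inspect it.
results = asr_model.transcribe(audio_files, batch_size=8)
print(results)
```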

### Input

This model accepts 16000 Hz Mono-channel Audio (wav files) as input.
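
If your recordings are not already 16 kHz mono wav, they can be converted before transcription. A minimal sketch using `librosa` and `soundfile` (not part of NeMo; install them separately), with placeholder file names:

```python
import librosa
import soundfile as sf

# Load any audio file, downmix to mono, and resample to 16 kHz (placeholder paths).
audio, sr = librosa.load("input_audio.mp3", sr=16000, mono=True)

# Write a 16-bit PCM wav matching the model's expected input format.
sf.write("input_audio_16k.wav", audio, 16000, subtype="PCM_16")
```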

### Output

This model provides transcribed speech as a string for a given audio sample.

## Model Architecture

FastConformer [1] is an optimized version of the Conformer model with 8x depthwise-separable convolutional downsampling. The model is trained in a multitask setup with joint Transducer and CTC decoder loss. You may find more information on the details of FastConformer here: [Fast-Conformer Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#fast-conformer) and about Hybrid Transducer-CTC training here: [Hybrid Transducer-CTC](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#hybrid-transducer-ctc).
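
Because the hybrid model contains both decoders, the decoding branch can also be switched from Python rather than via `transcribe_speech.py`. A hedged sketch, assuming your installed NeMo version exposes a `decoder_type` argument on `change_decoding_strategy` (check the API if it differs):

```python
import nemo.collections.asr as nemo_asr

asr_model = nemo_asr.models.EncDecHybridRNNTCTCBPEModel.from_pretrained(
    model_name="nvidia/stt_it_fastconformer_hybrid_large_pc"
)

# Assumption: this NeMo version accepts decoder_type on change_decoding_strategy.
# Switch the active decoding branch to CTC (Transducer/RNNT is the default).
asr_model.change_decoding_strategy(decoding_cfg=None, decoder_type="ctc")
ctc_text = asr_model.transcribe(['2086-149220-0033.wav'])

# Switch back to the Transducer (RNNT) branch.
asr_model.change_decoding_strategy(decoding_cfg=None, decoder_type="rnnt")
rnnt_text = asr_model.transcribe(['2086-149220-0033.wav'])
```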

## Training

The NeMo toolkit [3] was used to train the model for several hundred epochs. The model was trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_hybrid_transducer_ctc/speech_to_text_hybrid_rnnt_ctc_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/fastconformer/hybrid_transducer_ctc/fastconformer_hybrid_transducer_ctc_bpe.yaml).

The tokenizer for this model was built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).
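
For reference, a tokenizer with the same characteristics as this model's (SentencePiece BPE, vocabulary size 512) could be rebuilt roughly as follows. This is a hedged sketch: the flag names follow recent versions of `process_asr_text_tokenizer.py` and should be verified against the script's `--help`, and the manifest and output paths are placeholders.

```shell
python [NEMO_GIT_FOLDER]/scripts/tokenizers/process_asr_text_tokenizer.py \
  --manifest="<TRAIN MANIFEST JSON(S)>" \
  --data_root="<OUTPUT TOKENIZER DIRECTORY>" \
  --tokenizer="spe" \
  --spe_type="bpe" \
  --vocab_size=512
```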

### Datasets

The models in this collection are trained on a composite dataset (NeMo PnC IT ASRSET) comprising 487 hours of Italian speech:

- Mozilla Common Voice 12.0 (Italian) - 220 hours after data cleaning. [Speech Data Processor](https://github.com/NVIDIA/NeMo-speech-data-processor) config used to prepare this data is [here](https://github.com/NVIDIA/NeMo-speech-data-processor/blob/main/dataset_configs/italian/mcv/config.yaml).
- Multilingual LibriSpeech (Italian) - 214 hours after data cleaning. [Speech Data Processor](https://github.com/NVIDIA/NeMo-speech-data-processor) config used to prepare this data is [here](https://github.com/NVIDIA/NeMo-speech-data-processor/blob/main/dataset_configs/italian/mls/config.yaml).
- VoxPopuli transcribed subset (Italian) - 53 hours after data cleaning. [Speech Data Processor](https://github.com/NVIDIA/NeMo-speech-data-processor) config used to prepare this data is [here](https://github.com/NVIDIA/NeMo-speech-data-processor/blob/main/dataset_configs/italian/voxpopuli/config.yaml).

## Performance

The performance of Automatic Speech Recognition models is measured using Word Error Rate (WER). Since this model is trained on multiple domains and a relatively large corpus, it will generally transcribe audio well across a range of domains.

The following tables summarize the performance of the available models in this collection with the Transducer decoder. Performance is reported in terms of Word Error Rate (WER%) with greedy decoding. A minimal sketch of how WER is computed follows the tables.


a) On data without Punctuation and Capitalization

| Version | Tokenizer             | Vocabulary Size | MCV 12.0 Dev | MCV 12.0 Test | MLS Dev | MLS Test | VoxPopuli Dev | VoxPopuli Test |
|---------|-----------------------|-----------------|--------------|---------------|---------|----------|---------------|----------------|
| 1.20.0  | SentencePiece BPE     | 512             | 5.19%        | 5.64%         | 13.01%  | 11.39%   | 13.02%        | 16.22%         |


b) On data with Punctuation and Capitalization

| Version | Tokenizer             | Vocabulary Size | MCV 12.0 Dev | MCV 12.0 Test | MLS Dev\* | MLS Test\* | VoxPopuli Dev | VoxPopuli Test |
|---------|-----------------------|-----------------|--------------|---------------|-----------|------------|---------------|----------------|
| 1.20.0  | SentencePiece BPE     | 512             | 7.70%        | 8.11%         | 21.69%    | 18.27%     | 16.96%        | 19.97%         |

\* We use only a subset of dev/test sets with P&C restored from the original books
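
For context, WER counts the word-level substitutions, deletions, and insertions needed to turn a hypothesis into its reference, divided by the number of reference words. A minimal, self-contained sketch (not part of the NeMo evaluation pipeline) that computes it with a standard word-level edit distance:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Example: one substitution over five reference words -> 20% WER.
print(word_error_rate("buongiorno a tutti da roma", "buongiorno a tutti da milano"))
```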

## Limitations
Since this model was trained on publicly available speech datasets, its performance might degrade for speech that includes technical terms or vernacular the model has not been trained on. The model might also perform worse for accented speech. The model only outputs the punctuation marks ```'.', ',', '?'``` and hence might not do well in scenarios where other punctuation is also expected.

## NVIDIA Riva: Deployment

[NVIDIA Riva](https://developer.nvidia.com/riva) is an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, on edge, and embedded.
Additionally, Riva provides:

* World-class out-of-the-box accuracy for the most common languages with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours
* Best in class accuracy with run-time word boosting (e.g., brand and product names) and customization of acoustic model, language model, and inverse text normalization
* Streaming speech recognition, Kubernetes compatible scaling, and enterprise-grade support

Although this model isn’t supported yet by Riva, the [list of supported models is here](https://huggingface.co/models?other=Riva).
Check out [Riva live demo](https://developer.nvidia.com/riva#demos).

## References
[1] [Fast Conformer with Linearly Scalable Attention for Efficient Speech Recognition](https://arxiv.org/abs/2305.05084)

[2] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece)

[3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)

## License

License to use this model is covered by the [CC-BY-4.0 License](https://creativecommons.org/licenses/by/4.0/legalcode) unless another License/Terms Of Use/EULA is clearly specified. By downloading the public and release version of the model, you accept the terms and conditions of the [CC-BY-4.0 License](https://creativecommons.org/licenses/by/4.0/legalcode).