nithinraok committed Commit 7239fb0
Parent(s): 2908ba3
Update README.md
README.md CHANGED
@@ -233,7 +233,7 @@ This model uses a Hybrid FastConformer-TDT-CTC architecture. FastConformer [1] i
 ## Training
 
-The NeMo toolkit [3] was used for finetuning this model for 20,000 steps over `parakeet-tdt-1.1` model. This model is trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/
+The NeMo toolkit [3] was used for finetuning this model for 20,000 steps over the `parakeet-tdt-1.1` model. This model was trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_hybrid_transducer_ctc/speech_to_text_hybrid_rnnt_ctc_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/fastconformer/hybrid_transducer_ctc/fastconformer_hybrid_transducer_ctc_bpe.yaml).
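A finetuning run with the referenced example script might be launched as below. This is a sketch, not the command used for this commit: the manifest paths, the pretrained-model identifier, and the Hydra overrides are assumptions for illustration.

```shell
# Hypothetical NeMo finetuning launch (Hydra-style overrides).
# Manifest paths and the pretrained-model name are placeholders, not
# taken from this commit.
python examples/asr/asr_hybrid_transducer_ctc/speech_to_text_hybrid_rnnt_ctc_bpe.py \
  --config-path=../conf/fastconformer/hybrid_transducer_ctc \
  --config-name=fastconformer_hybrid_transducer_ctc_bpe \
  model.train_ds.manifest_filepath=train_manifest.json \
  model.validation_ds.manifest_filepath=val_manifest.json \
  +init_from_pretrained_model="<parakeet-tdt-1.1 checkpoint>" \
  trainer.max_steps=20000
```

The `trainer.max_steps=20000` override mirrors the 20,000 finetuning steps stated in the README text.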
 
 The tokenizers for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).
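The tokenizer-building step referenced above can be sketched as a command-line invocation of NeMo's `process_asr_text_tokenizer.py`. The flag values below (manifest path, output directory, vocabulary size) are assumptions for illustration, not values recorded in this commit.

```shell
# Hypothetical tokenizer build from training transcripts; flag values
# are placeholders, not taken from this commit.
python scripts/tokenizers/process_asr_text_tokenizer.py \
  --manifest=train_manifest.json \
  --data_root=./tokenizer_out \
  --vocab_size=1024 \
  --tokenizer=spe \
  --spe_type=bpe
```

This produces a SentencePiece BPE tokenizer directory that the training config can then point at.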