---
license: apache-2.0
language:
  - zh
metrics:
  - accuracy
  - cer
pipeline_tag: automatic-speech-recognition
tags:
  - Paraformer
  - FunASR
  - ASR
---

## Introduction

Paraformer is a non-autoregressive end-to-end speech recognition model. Compared with mainstream autoregressive models, non-autoregressive models emit the target text for an entire sentence in parallel, which makes them particularly well suited to parallel inference on GPUs. Paraformer is the first known non-autoregressive model to match the performance of autoregressive end-to-end models on industrial-scale data. Combined with GPU inference, it improves inference efficiency by a factor of 10, cutting the machine cost of speech recognition cloud services by nearly 10x.

This repo shows how to use Paraformer with the funasr_onnx runtime. The model comes from FunASR and was trained on 60,000 hours of Mandarin data. Paraformer took first place on the SpeechIO leaderboard.

We have released a large number of industrial-grade models, covering speech recognition, voice activity detection, punctuation restoration, speaker verification, speaker diarization, and timestamp prediction (force alignment). If you are interested, please refer to FunASR.

## Install funasr_onnx

```shell
pip install -U funasr_onnx
# For users in China, you can install from a mirror:
# pip install -U funasr_onnx -i https://mirror.sjtu.edu.cn/pypi/web/simple
```
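
A quick way to verify the installation is to import the `Paraformer` class used throughout this README:

```python
# Sanity check: this import should succeed after installation.
from funasr_onnx import Paraformer

print(Paraformer)
```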

## Download the model

```shell
git clone https://huggingface.co/funasr/paraformer-large
```
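
The `.onnx` weights are large binary files. If they come down as small Git LFS pointer files rather than the real weights (common when git-lfs is not set up on the machine), fetch them explicitly; this assumes the git-lfs CLI is installed:

```shell
cd paraformer-large
# Replace LFS pointer files with the actual model weights
git lfs pull
```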

## Inference with runtime

### Speech Recognition

#### Paraformer

```python
from funasr_onnx import Paraformer

# Path to the exported model directory (contains model.onnx, config.yaml, am.mvn)
model_dir = "./export/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch"
model = Paraformer(model_dir, batch_size=1, quantize=True)

# A list of wav file paths to transcribe
wav_path = ['./export/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/example/asr_example.wav']

result = model(wav_path)
print(result)
```
- `model_dir`: the model path, which contains `model.onnx`, `config.yaml`, `am.mvn`
- `batch_size`: `1` (default), the batch size during inference
- `device_id`: `-1` (default), infer on CPU. To infer on GPU, set it to the GPU id (make sure you have installed onnxruntime-gpu); see the combined sketch after this list
- `quantize`: `False` (default), load `model.onnx` from `model_dir`. If set to `True`, load `model_quant.onnx` from `model_dir`
- `intra_op_num_threads`: `4` (default), the number of threads used for intra-op parallelism on CPU
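
As an example, the sketch below combines these options for batched GPU inference; it assumes onnxruntime-gpu is installed, GPU 0 is available, and the audio file names are hypothetical:

```python
from funasr_onnx import Paraformer

model_dir = "./export/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch"

# Batched inference on GPU 0 with the non-quantized model.onnx;
# intra_op_num_threads only applies when running on CPU.
model = Paraformer(model_dir, batch_size=4, device_id=0, quantize=False)

result = model(["audio1.wav", "audio2.wav"])  # hypothetical file paths
print(result)
```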

Input: wav file(s); supported formats: `str` (a path), `np.ndarray` (a waveform), `List[str]` (a batch of paths)

Output: `List[str]`: the recognition results
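
Because `np.ndarray` input is accepted, you can also pass audio that is already loaded in memory. A minimal sketch, assuming the third-party `soundfile` package and 16 kHz mono audio (the sample rate this model was trained on):

```python
import soundfile as sf
from funasr_onnx import Paraformer

model_dir = "./export/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch"
model = Paraformer(model_dir, batch_size=1, quantize=True)

# Load a 16 kHz mono wav into a float32 waveform and pass it directly.
speech, sample_rate = sf.read(model_dir + "/example/asr_example.wav", dtype="float32")
result = model(speech)
print(result)
```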

## Performance benchmark

Please refer to the benchmark.