Whisper-Small-En: Optimized for Mobile Deployment
Automatic speech recognition (ASR) model for English transcription as well as translation
OpenAI’s Whisper ASR (Automatic Speech Recognition) model is a state-of-the-art system designed for transcribing spoken language into written text. It exhibits robust performance in realistic, noisy environments, making it highly reliable for real-world applications. In particular, it excels at long-form transcription, accurately handling audio clips up to 30 seconds long. Time to first token is the encoder's latency, while time to each additional token is the decoder's latency, assuming the mean decoded sequence length specified below.
This model is an implementation of Whisper-Small-En found here.
This repository provides scripts to run Whisper-Small-En on Qualcomm® devices. More details on model performance across various devices can be found here.
Model Details
- Model Type: Speech recognition
- Model Stats:
- Model checkpoint: small.en
- Input resolution: 80x3000 (30 seconds of audio)
- Mean decoded sequence length: 112 tokens
- Number of parameters (WhisperEncoder): 102M
- Model size (WhisperEncoder): 390 MB
- Number of parameters (WhisperDecoder): 139M
- Model size (WhisperDecoder): 531 MB
| Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
| WhisperDecoder | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 29.126 | 16 - 96 | FP16 | NPU | Whisper-Small-En.tflite |
| WhisperDecoder | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 11.961 | 54 - 137 | FP16 | NPU | Whisper-Small-En.so |
| WhisperDecoder | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | ONNX | 61.425 | 154 - 199 | FP16 | NPU | Whisper-Small-En.onnx |
| WhisperDecoder | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 23.441 | 16 - 150 | FP16 | NPU | Whisper-Small-En.tflite |
| WhisperDecoder | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 9.604 | 53 - 160 | FP16 | NPU | Whisper-Small-En.so |
| WhisperDecoder | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | ONNX | 51.698 | 16 - 327 | FP16 | NPU | Whisper-Small-En.onnx |
| WhisperDecoder | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 18.162 | 16 - 176 | FP16 | NPU | Whisper-Small-En.tflite |
| WhisperDecoder | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 7.555 | 49 - 184 | FP16 | NPU | Use Export Script |
| WhisperDecoder | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | ONNX | 42.696 | 86 - 408 | FP16 | NPU | Whisper-Small-En.onnx |
| WhisperDecoder | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 29.309 | 16 - 101 | FP16 | NPU | Whisper-Small-En.tflite |
| WhisperDecoder | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 12.213 | 61 - 63 | FP16 | NPU | Use Export Script |
| WhisperDecoder | SA7255P ADP | SA7255P | TFLITE | 100.26 | 16 - 175 | FP16 | NPU | Whisper-Small-En.tflite |
| WhisperDecoder | SA7255P ADP | SA7255P | QNN | 74.87 | 60 - 70 | FP16 | NPU | Use Export Script |
| WhisperDecoder | SA8255 (Proxy) | SA8255P Proxy | TFLITE | 29.902 | 16 - 101 | FP16 | NPU | Whisper-Small-En.tflite |
| WhisperDecoder | SA8255 (Proxy) | SA8255P Proxy | QNN | 12.096 | 54 - 55 | FP16 | NPU | Use Export Script |
| WhisperDecoder | SA8295P ADP | SA8295P | TFLITE | 31.128 | 16 - 164 | FP16 | NPU | Whisper-Small-En.tflite |
| WhisperDecoder | SA8295P ADP | SA8295P | QNN | 14.544 | 57 - 71 | FP16 | NPU | Use Export Script |
| WhisperDecoder | SA8650 (Proxy) | SA8650P Proxy | TFLITE | 29.982 | 14 - 97 | FP16 | NPU | Whisper-Small-En.tflite |
| WhisperDecoder | SA8650 (Proxy) | SA8650P Proxy | QNN | 12.14 | 57 - 59 | FP16 | NPU | Use Export Script |
| WhisperDecoder | SA8775P ADP | SA8775P | TFLITE | 33.024 | 16 - 175 | FP16 | NPU | Whisper-Small-En.tflite |
| WhisperDecoder | SA8775P ADP | SA8775P | QNN | 14.735 | 57 - 66 | FP16 | NPU | Use Export Script |
| WhisperDecoder | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 33.055 | 16 - 140 | FP16 | NPU | Whisper-Small-En.tflite |
| WhisperDecoder | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 16.795 | 53 - 172 | FP16 | NPU | Use Export Script |
| WhisperDecoder | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 10.56 | 61 - 61 | FP16 | NPU | Use Export Script |
| WhisperDecoder | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 52.337 | 232 - 232 | FP16 | NPU | Whisper-Small-En.onnx |
| WhisperEncoder | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 807.519 | 79 - 160 | FP16 | GPU | Whisper-Small-En.tflite |
| WhisperEncoder | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 804.747 | 0 - 211 | FP16 | NPU | Whisper-Small-En.so |
| WhisperEncoder | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 602.309 | 110 - 200 | FP16 | GPU | Whisper-Small-En.tflite |
| WhisperEncoder | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 597.586 | 0 - 837 | FP16 | NPU | Whisper-Small-En.so |
| WhisperEncoder | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | ONNX | 869.225 | 0 - 1429 | FP16 | NPU | Whisper-Small-En.onnx |
| WhisperEncoder | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 544.489 | 111 - 141 | FP16 | GPU | Whisper-Small-En.tflite |
| WhisperEncoder | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 515.742 | 0 - 906 | FP16 | NPU | Use Export Script |
| WhisperEncoder | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | ONNX | 677.156 | 172 - 1609 | FP16 | NPU | Whisper-Small-En.onnx |
| WhisperEncoder | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 1255.513 | 18 - 221 | FP16 | GPU | Whisper-Small-En.tflite |
| WhisperEncoder | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 675.441 | 1 - 3 | FP16 | NPU | Use Export Script |
| WhisperEncoder | SA7255P ADP | SA7255P | TFLITE | 4429.057 | 109 - 142 | FP16 | GPU | Whisper-Small-En.tflite |
| WhisperEncoder | SA7255P ADP | SA7255P | QNN | 3217.361 | 1 - 11 | FP16 | NPU | Use Export Script |
| WhisperEncoder | SA8255 (Proxy) | SA8255P Proxy | TFLITE | 685.501 | 110 - 158 | FP16 | GPU | Whisper-Small-En.tflite |
| WhisperEncoder | SA8255 (Proxy) | SA8255P Proxy | QNN | 687.338 | 1 - 3 | FP16 | NPU | Use Export Script |
| WhisperEncoder | SA8295P ADP | SA8295P | TFLITE | 657.369 | 110 - 142 | FP16 | GPU | Whisper-Small-En.tflite |
| WhisperEncoder | SA8295P ADP | SA8295P | QNN | 700.793 | 0 - 15 | FP16 | NPU | Use Export Script |
| WhisperEncoder | SA8650 (Proxy) | SA8650P Proxy | TFLITE | 686.08 | 50 - 129 | FP16 | GPU | Whisper-Small-En.tflite |
| WhisperEncoder | SA8650 (Proxy) | SA8650P Proxy | QNN | 674.708 | 0 - 3 | FP16 | NPU | Use Export Script |
| WhisperEncoder | SA8775P ADP | SA8775P | TFLITE | 1287.541 | 88 - 121 | FP16 | GPU | Whisper-Small-En.tflite |
| WhisperEncoder | SA8775P ADP | SA8775P | QNN | 604.581 | 1 - 10 | FP16 | NPU | Use Export Script |
| WhisperEncoder | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 983.989 | 58 - 157 | FP16 | GPU | Whisper-Small-En.tflite |
| WhisperEncoder | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 505.395 | 0 - 0 | FP16 | NPU | Use Export Script |
| WhisperEncoder | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 1340.942 | 237 - 237 | FP16 | NPU | Whisper-Small-En.onnx |
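As noted above, time to first token is driven by the encoder and each additional token by the decoder. A back-of-envelope sketch of end-to-end latency, using the Galaxy S23 QNN numbers from the table and the 112-token mean decoded length:

```python
# Rough end-to-end latency estimate for one 30-second chunk on Galaxy S23 (QNN).
# Numbers are taken from the table above; this is an illustrative sketch only.
encoder_ms = 804.747           # time to first token (encoder latency)
decoder_ms_per_token = 11.961  # time per additional token (decoder latency)
mean_decoded_tokens = 112      # mean decoded sequence length

total_ms = encoder_ms + mean_decoded_tokens * decoder_ms_per_token
print(f"~{total_ms / 1000:.1f} s per 30-second chunk")  # ~2.1 s
```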
Installation
This model can be installed as a Python package via pip.
```bash
pip install "qai-hub-models[whisper_small_en]"
```
Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign in to Qualcomm® AI Hub with your Qualcomm® ID. Once signed in, navigate to Account -> Settings -> API Token.

With this API token, you can configure your client to run models on cloud-hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to docs for more information.
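To confirm the token is configured correctly, you can list the cloud-hosted devices visible to your account. A minimal sketch using the qai_hub Python client:

```python
import qai_hub as hub

# List a few cloud-hosted devices to verify the API token works.
for device in hub.get_devices()[:5]:
    print(device.name)
```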
Demo off target
The package contains a simple end-to-end demo that downloads pre-trained weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.whisper_small_en.demo
```
The above demo runs a reference implementation of pre-processing, model inference, and post-processing.

NOTE: To run this in a Jupyter Notebook or a Google Colab-like environment, add the following to your cell instead of the command above.
```
%run -m qai_hub_models.models.whisper_small_en.demo
```
Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm® device. This script does the following:
- Runs a performance check on-device on a cloud-hosted device
- Downloads compiled assets that can be deployed on-device for Android
- Checks accuracy between PyTorch and on-device outputs
```bash
python -m qai_hub_models.models.whisper_small_en.export
```
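The export script also accepts command-line options (for example, to pick a different target device); since the available flags can vary between releases, check the built-in help:

```bash
python -m qai_hub_models.models.whisper_small_en.export --help
```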
Profiling Results
```
------------------------------------------------------------
WhisperDecoder
Device                          : Samsung Galaxy S23 (13)
Runtime                         : TFLITE
Estimated inference time (ms)   : 29.1
Estimated peak memory usage (MB): [16, 96]
Total # Ops                     : 2573
Compute Unit(s)                 : NPU (2573 ops)
------------------------------------------------------------
WhisperEncoder
Device                          : Samsung Galaxy S23 (13)
Runtime                         : TFLITE
Estimated inference time (ms)   : 807.5
Estimated peak memory usage (MB): [79, 160]
Total # Ops                     : 911
Compute Unit(s)                 : GPU (900 ops) CPU (11 ops)
------------------------------------------------------------
```
How does this work?
This export script leverages Qualcomm® AI Hub to optimize, validate, and deploy this model on-device. Let's go through each step below in detail:
Step 1: Compile model for on-device deployment
To compile a PyTorch model for on-device deployment, we first trace the model in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
```python
import torch

import qai_hub as hub
from qai_hub_models.models.whisper_small_en import Model

# Load the model
model = Model.from_pretrained()
decoder_model = model.decoder
encoder_model = model.encoder

# Device
device = hub.Device("Samsung Galaxy S23")

# Trace the decoder model
decoder_input_shape = decoder_model.get_input_spec()
decoder_sample_inputs = decoder_model.sample_inputs()

traced_decoder_model = torch.jit.trace(
    decoder_model, [torch.tensor(data[0]) for _, data in decoder_sample_inputs.items()]
)

# Compile the decoder for a specific device
decoder_compile_job = hub.submit_compile_job(
    model=traced_decoder_model,
    device=device,
    input_specs=decoder_model.get_input_spec(),
)

# Get target model to run on-device
decoder_target_model = decoder_compile_job.get_target_model()

# Trace the encoder model
encoder_input_shape = encoder_model.get_input_spec()
encoder_sample_inputs = encoder_model.sample_inputs()

traced_encoder_model = torch.jit.trace(
    encoder_model, [torch.tensor(data[0]) for _, data in encoder_sample_inputs.items()]
)

# Compile the encoder for a specific device
encoder_compile_job = hub.submit_compile_job(
    model=traced_encoder_model,
    device=device,
    input_specs=encoder_model.get_input_spec(),
)

# Get target model to run on-device
encoder_target_model = encoder_compile_job.get_target_model()
```
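If you want a local copy of the compiled assets (for example, to bundle them into an app), the compile jobs can save them to disk. A minimal sketch, assuming the `download_target_model` helper behaves as in recent qai_hub releases; check the qai_hub documentation for the exact signature:

```python
# Save the compiled models locally (filenames are illustrative).
decoder_compile_job.download_target_model("whisper_decoder.tflite")
encoder_compile_job.download_target_model("whisper_encoder.tflite")
```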
Step 2: Performance profiling on cloud-hosted device
After compiling the models from Step 1, they can be profiled on-device using the `target_model`. Note that this script runs the model on a device automatically provisioned in the cloud. Once the job is submitted, you can navigate to a provided job URL to view a variety of on-device performance metrics.
```python
decoder_profile_job = hub.submit_profile_job(
    model=decoder_target_model,
    device=device,
)

encoder_profile_job = hub.submit_profile_job(
    model=encoder_target_model,
    device=device,
)
```
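Once the jobs finish, the raw profiling data can be pulled down for offline inspection. A sketch, assuming the `download_profile` accessor available in recent qai_hub releases:

```python
# Fetch the profiling results (sketch; exact accessors may vary
# across qai_hub versions).
decoder_profile = decoder_profile_job.download_profile()
encoder_profile = encoder_profile_job.download_profile()
```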
Step 3: Verify on-device accuracy
To verify the accuracy of the model on-device, run inference with sample input data on the same cloud-hosted device.
```python
decoder_input_data = decoder_model.sample_inputs()
decoder_inference_job = hub.submit_inference_job(
    model=decoder_target_model,
    device=device,
    inputs=decoder_input_data,
)
decoder_inference_job.download_output_data()

encoder_input_data = encoder_model.sample_inputs()
encoder_inference_job = hub.submit_inference_job(
    model=encoder_target_model,
    device=device,
    inputs=encoder_input_data,
)
encoder_inference_job.download_output_data()
```
With the output of the model, you can compute metrics like PSNR and relative error, or spot-check the output against the expected output.
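For example, a simple PSNR comparison between the PyTorch output and the on-device output could look like the following sketch; `torch_output` and `device_output` are hypothetical placeholders for arrays you obtain yourself:

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray) -> float:
    """Peak signal-to-noise ratio (dB) between reference and test arrays."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    peak = np.abs(reference).max()
    return float(20 * np.log10(peak / np.sqrt(mse)))

# torch_output: output of the PyTorch model on the sample inputs (hypothetical).
# device_output: corresponding array from download_output_data() (hypothetical).
# print(f"PSNR: {psnr(torch_output, device_output):.1f} dB")
```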
Note: This on-device profiling and inference requires access to Qualcomm® AI Hub. Sign up for access.
Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): This tutorial provides a guide to deploy the `.tflite` model in an Android application.
- QNN (`.so` export): This sample app provides instructions on how to use the `.so` shared library in an Android application.
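Before wiring the model into an Android app, a quick desktop sanity check can confirm the exported `.tflite` file loads and report its I/O shapes. A minimal sketch, assuming a locally saved file named whisper_decoder.tflite; note that some ops may require the appropriate Qualcomm delegate to actually execute:

```python
import tensorflow as tf

# Load the exported TFLite model and inspect its inputs/outputs.
interpreter = tf.lite.Interpreter(model_path="whisper_decoder.tflite")
interpreter.allocate_tensors()

for detail in interpreter.get_input_details():
    print("input:", detail["name"], detail["shape"], detail["dtype"])
for detail in interpreter.get_output_details():
    print("output:", detail["name"], detail["shape"], detail["dtype"])
```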
View on Qualcomm® AI Hub
Get more details on Whisper-Small-En's performance across various devices here. Explore all available models on Qualcomm® AI Hub.
License
- The license for the original implementation of Whisper-Small-En can be found here.
- The license for the compiled assets for on-device deployment can be found here.
Community
- Join our AI Hub Slack community to collaborate, post questions and learn more about on-device AI.
- For questions or feedback, please reach out to us.