Whisper large-v3-turbo model for CTranslate2

This repository contains the Whisper large-v3-turbo model converted to the CTranslate2 format, for use with CTranslate2-based projects such as faster-whisper.

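A conversion like this can be reproduced with CTranslate2's Transformers converter. The snippet below is a sketch, not a record of how this repository was actually built: it assumes openai/whisper-large-v3-turbo as the source checkpoint and int8 quantization to match the compute type used in the example further down.

from ctranslate2.converters import TransformersConverter

# Convert the assumed source checkpoint to the CTranslate2 format,
# copying the tokenizer files that faster-whisper expects and
# quantizing the weights to int8.
converter = TransformersConverter(
    "openai/whisper-large-v3-turbo",
    copy_files=["tokenizer.json", "preprocessor_config.json"],
)
converter.convert("faster-whisper-large-v3-turbo-ct2", quantization="int8")
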
Example

from huggingface_hub import snapshot_download
from faster_whisper import WhisperModel

# Download the converted model files from the Hugging Face Hub.
repo_id = "jootanehorror/faster-whisper-large-v3-turbo-ct2"
local_dir = "faster-whisper-large-v3-turbo-ct2"
snapshot_download(repo_id=repo_id, local_dir=local_dir, repo_type="model")

# Load the model on CPU with int8 quantization.
model = WhisperModel(local_dir, device="cpu", compute_type="int8")

# Transcribe an audio file; segments are yielded lazily as a generator.
segments, info = model.transcribe("sample.mp3")

for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))

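If a CUDA GPU is available, the same local directory can be loaded with GPU settings instead. The line below is a minimal variation of the call above, assuming a device that supports float16.

# Load on GPU with half-precision weights instead of int8 on CPU.
model = WhisperModel(local_dir, device="cuda", compute_type="float16")
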
More information

For more information about the model, see its official GitHub page.
