
Whisper tiny model for CTranslate2

This repository contains the conversion of a fine-tuned tiny version of Whisper for Belarusian to the CTranslate2 model format.

This model can be used in CTranslate2 or projects based on CTranslate2 such as faster-whisper.

Install faster-whisper

pip install git+https://github.com/guillaumekln/faster-whisper.git

Example

from faster_whisper import WhisperModel

model = WhisperModel("bl4dylion/faster-whisper-tiny-belarusian")

segments, info = model.transcribe("audio.mp3")
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
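You can also pass decoding options to transcribe. A minimal sketch, assuming you want to force the Belarusian language code ("be") and a beam search width of 5 rather than rely on automatic language detection:

from faster_whisper import WhisperModel

model = WhisperModel("bl4dylion/faster-whisper-tiny-belarusian")

# Force Belarusian decoding and use beam search
segments, info = model.transcribe("audio.mp3", language="be", beam_size=5)
print("Detected language: %s (probability %.2f)" % (info.language, info.language_probability))
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))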

Conversion details

The original model was converted with the following command:

ct2-transformers-converter --model <model_path> --output_dir faster-whisper-tiny-belarusian \
    --copy_files tokenizer_config.json --quantization float16

Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the compute_type option in CTranslate2.
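For example, to load the FP16 weights with 8-bit quantization on CPU (a minimal sketch; the actual values you choose for device and compute_type depend on your hardware):

from faster_whisper import WhisperModel

# CTranslate2 converts the stored FP16 weights to INT8 at load time
model = WhisperModel("bl4dylion/faster-whisper-tiny-belarusian", device="cpu", compute_type="int8")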
