
whisper-large-v3-turbo-german-f16-q4

This model was converted to MLX format from primeline/whisper-large-v3-turbo-german and quantized to 4-bit from float16 weights.

It was made with a custom script for converting safetensors Whisper models.

There is also an unquantized float16 version.
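For reference, the stock convert.py in mlx-examples/whisper performs a similar conversion with optional quantization; the custom script mentioned above presumably adds the safetensors handling this repo needed. The invocation below is only a sketch of that stock script, not the script actually used here, and the flag names are assumptions that may differ between versions.

# Hypothetical conversion + 4-bit quantization via mlx-examples/whisper/convert.py
# (flag names are assumptions; check convert.py --help in your checkout)
python convert.py \
    --torch-name-or-path primeline/whisper-large-v3-turbo-german \
    --mlx-path mlx_models/whisper-large-v3-turbo-german-f16-q4 \
    --dtype float16 \
    -q --q-bits 4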

Use with MLX

git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples/whisper/
pip install -r requirements.txt

# Example usage
import mlx_whisper

result = mlx_whisper.transcribe("test.mp3", path_or_hf_repo="mlx-community/whisper-large-v3-turbo-german-f16-q4")
print(result)
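The output follows the usual Whisper format. The snippet below is a sketch that assumes transcribe accepts the standard decode options (e.g. language) and returns the usual "text" and "segments" keys; adjust if your mlx_whisper version differs.

# Force German decoding and print timestamped segments.
# Assumes Whisper-style decode options and output keys ("text", "segments").
import mlx_whisper

result = mlx_whisper.transcribe(
    "test.mp3",
    path_or_hf_repo="mlx-community/whisper-large-v3-turbo-german-f16-q4",
    language="de",
)
print(result["text"])
for segment in result["segments"]:
    print(f"[{segment['start']:.1f}s -> {segment['end']:.1f}s] {segment['text'].strip()}")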
