
Bielik-11B-v2.2-Instruct-MLX-8bit

This model was converted to MLX format from SpeakLeash's Bielik-11B-v2.2-Instruct.

DISCLAIMER: Be aware that quantized models may show reduced response quality and are more prone to hallucinations!

Use with MLX

pip install mlx-lm

from mlx_lm import load, generate

model, tokenizer = load("speakleash/Bielik-11B-v2.2-Instruct-MLX-8bit")

# For an instruct-tuned model, format the prompt with the chat template
# before generating
messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)

Model description:

Responsible for model quantization

  • Remigiusz Kinas (SpeakLeash) - team leadership, conceptualizing, calibration data preparation, process creation and quantized model delivery.

Contact Us

If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, join the SpeakLeash Discord.

