---
language:
- pl
license: apache-2.0
library_name: transformers
tags:
- finetuned
- gguf
- 4bit
inference: false
pipeline_tag: text-generation
base_model: speakleash/Bielik-11B-v2.2-Instruct
---

# Bielik-11B-v2.2-Instruct-MLX-4bit

This model was converted to MLX format from [SpeakLeash](https://speakleash.org/)'s [Bielik-11B-v2.2-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct).

**DISCLAIMER: Be aware that quantised models show reduced response quality and possible hallucinations!**

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("speakleash/Bielik-11B-v2.2-Instruct-MLX-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```

### Model description:

* **Developed by:** [SpeakLeash](https://speakleash.org/) & [ACK Cyfronet AGH](https://www.cyfronet.pl/)
* **Language:** Polish
* **Model type:** causal decoder-only
* **Quant from:** [Bielik-11B-v2.2-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct)
* **Finetuned from:** [Bielik-11B-v2](https://huggingface.co/speakleash/Bielik-11B-v2)
* **License:** Apache 2.0 and [Terms of Use](https://bielik.ai/terms/)

### Responsible for model quantization

* [Remigiusz Kinas](https://www.linkedin.com/in/remigiusz-kinas/) (SpeakLeash) - team leadership, conceptualization, calibration data preparation, process creation and quantized model delivery.

## Contact Us

If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, join our [Discord SpeakLeash](https://discord.gg/CPBxPce4).
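Since this is an instruction-tuned model, prompts generally work better when passed through the tokenizer's chat template rather than as raw text. A minimal sketch of chat-style prompting with mlx-lm (the `build_messages` helper and the example Polish prompt are illustrative, not part of the official API; running `chat` requires Apple Silicon and downloads the model on first use):

```python
# Sketch: chat-style prompting for Bielik via mlx-lm.
# Assumes mlx-lm is installed (pip install mlx-lm) on an Apple Silicon Mac.

def build_messages(user_prompt: str) -> list[dict]:
    # Standard chat-message format consumed by the tokenizer's chat template.
    return [{"role": "user", "content": user_prompt}]

def chat(user_prompt: str, max_tokens: int = 256) -> str:
    # Imports kept local so build_messages stays usable without mlx installed.
    from mlx_lm import load, generate

    model, tokenizer = load("speakleash/Bielik-11B-v2.2-Instruct-MLX-4bit")
    # Render the messages with the model's chat template before generating.
    prompt = tokenizer.apply_chat_template(
        build_messages(user_prompt),
        tokenize=False,
        add_generation_prompt=True,
    )
    return generate(model, tokenizer, prompt=prompt, max_tokens=max_tokens)
```

For example, `chat("Napisz krótki wiersz o Krakowie.")` would return a Polish-language completion.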