---
language:
- pl
license: apache-2.0
library_name: transformers
tags:
- finetuned
- gguf
- 4bit
inference: false
pipeline_tag: text-generation
base_model: speakleash/Bielik-11B-v2.2-Instruct
---

# Bielik-11B-v2.2-Instruct-Quanto-4bit

This model was converted to Quanto format from [SpeakLeash](https://speakleash.org/)'s [Bielik-11B-v2.2-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct).

**DISCLAIMER: Be aware that quantized models show reduced response quality and possible hallucinations!**

## About Quanto

Optimum Quanto is a PyTorch quantization backend for Hugging Face Optimum. The model can be loaded using:

```python
from optimum.quanto import QuantizedModelForCausalLM

qmodel = QuantizedModelForCausalLM.from_pretrained('speakleash/Bielik-11B-v2.2-Instruct-Quanto-4bit')
```

A short generation sketch is included at the end of this card.

### Model description:

* **Developed by:** [SpeakLeash](https://speakleash.org/) & [ACK Cyfronet AGH](https://www.cyfronet.pl/)
* **Language:** Polish
* **Model type:** causal decoder-only
* **Quant from:** [Bielik-11B-v2.2-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct)
* **Finetuned from:** [Bielik-11B-v2](https://huggingface.co/speakleash/Bielik-11B-v2)
* **License:** Apache 2.0 and [Terms of Use](https://bielik.ai/terms/)

### Responsible for model quantization

* [Remigiusz Kinas](https://www.linkedin.com/in/remigiusz-kinas/) (SpeakLeash) - team leadership, conceptualizing, calibration data preparation, process creation, and quantized model delivery.

## Contact Us

If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, join our [Discord SpeakLeash](https://discord.gg/CPBxPce4).
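
### Generation sketch

The loading snippet above only instantiates the quantized model. The following is a minimal, hedged sketch of running inference with it; it is not part of the original card. It assumes the tokenizer is taken from the base `speakleash/Bielik-11B-v2.2-Instruct` repository and that the Quanto wrapper forwards `generate()` to the underlying transformers model. The prompt text and sampling parameters are illustrative only.

```python
# Illustrative sketch (assumptions: tokenizer from the base model repo,
# generate() forwarded by the Quanto wrapper to the wrapped transformers model).
import torch
from transformers import AutoTokenizer
from optimum.quanto import QuantizedModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("speakleash/Bielik-11B-v2.2-Instruct")
qmodel = QuantizedModelForCausalLM.from_pretrained(
    "speakleash/Bielik-11B-v2.2-Instruct-Quanto-4bit"
)

# Build a chat-formatted prompt with the model's chat template (example Polish prompt).
messages = [{"role": "user", "content": "Czym jest kwantyzacja modeli językowych?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Generate a short completion and decode only the newly generated tokens.
with torch.no_grad():
    output = qmodel.generate(input_ids, max_new_tokens=256, do_sample=False)

print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```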