---
base_model: google/gemma-2-2b-it
library_name: transformers
license: gemma
pipeline_tag: text-generation
tags:
- conversational
- openvino
- nncf
- fp16
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you're required to review and
  agree to Google's usage license. To do this, please ensure you're logged in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---

This model is a quantized version of [`google/gemma-2-2b-it`](https://huggingface.co/google/gemma-2-2b-it), converted to the OpenVINO format. It was obtained via the [nncf-quantization](https://huggingface.co/spaces/echarlaix/nncf-quantization) space with [optimum-intel](https://github.com/huggingface/optimum-intel).

First make sure you have `optimum-intel` installed:

```bash
pip install optimum[openvino]
```

To load the model:

```python
from optimum.intel import OVModelForCausalLM

model_id = "AIFunOver/gemma-2-2b-it-openvino-fp16"
model = OVModelForCausalLM.from_pretrained(model_id)
```