Fihade/gemma-7b-it-q

This model was converted to MLX format from google/gemma-7b-it. Refer to the original model card for more details on the model.

Use with mlx

```shell
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Load the quantized model and its tokenizer from the Hugging Face Hub
model, tokenizer = load("Fihade/gemma-7b-it-q")

# Generate a response; verbose=True streams tokens as they are produced
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
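Since gemma-7b-it is an instruction-tuned chat model, raw prompts like `"hello"` may underperform compared to prompts wrapped in Gemma's documented `<start_of_turn>`/`<end_of_turn>` turn markers. As a minimal sketch (the helper function below is our own illustration, not part of mlx-lm; in practice the loaded tokenizer's `apply_chat_template`, where available, handles this formatting):

```python
def build_gemma_prompt(messages):
    """Format (role, text) chat turns using Gemma's turn markers,
    ending with an open model turn for the assistant to complete."""
    prompt = ""
    for role, text in messages:
        prompt += f"<start_of_turn>{role}\n{text}<end_of_turn>\n"
    prompt += "<start_of_turn>model\n"
    return prompt

prompt = build_gemma_prompt([("user", "hello")])
print(prompt)
```

The resulting string can then be passed as the `prompt` argument to `generate` in place of the plain `"hello"` above.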
Model size: 2B params (Safetensors) · Tensor types: FP16, U32