GreenBitAI/Qwen-2.5-72B-Instruct-layer-mix-bpw-4.0-mlx

This quantized low-bit model, GreenBitAI/Qwen-2.5-72B-Instruct-layer-mix-bpw-4.0-mlx, was converted to MLX format from GreenBitAI/Qwen-2.5-72B-Instruct-layer-mix-bpw-4.0 using gbx-lm version 0.3.4. Refer to the original model card for more details on the model.

Use with mlx

# Install the gbx-lm package
pip install gbx-lm

from gbx_lm import load, generate

# Load the quantized model and its tokenizer from the Hugging Face Hub
model, tokenizer = load("GreenBitAI/Qwen-2.5-72B-Instruct-layer-mix-bpw-4.0-mlx")
response = generate(model, tokenizer, prompt="hello", verbose=True)
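
Because this is an instruct-tuned model, prompts generally work better when wrapped in the model's chat template. The sketch below assumes the tokenizer returned by load exposes the standard Hugging Face apply_chat_template method and that generate accepts a max_tokens argument; both follow mlx-lm conventions, which gbx-lm is derived from, but verify against your installed gbx-lm version.

from gbx_lm import load, generate

model, tokenizer = load("GreenBitAI/Qwen-2.5-72B-Instruct-layer-mix-bpw-4.0-mlx")

# Wrap the user message in the chat template so the instruct model sees
# the role markers it was fine-tuned on.
messages = [{"role": "user", "content": "Explain 4-bit quantization in one sentence."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# max_tokens is assumed here based on the mlx-lm generate interface.
response = generate(model, tokenizer, prompt=prompt, verbose=True, max_tokens=256)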