# mlx-community/dolphin-vision-72b-4bit

This model was converted to MLX format from cognitivecomputations/dolphin-vision-72b using mlx-vlm version 0.0.11. Refer to the original model card for more details on the model.

## Use with mlx

```bash
pip install -U mlx-vlm
python -m mlx_vlm.generate --model mlx-community/dolphin-vision-72b-4bit --max-tokens 100 --temp 0.0
```
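For scripted use, the model can also be loaded through the mlx-vlm Python API. The sketch below assumes the `load`/`generate` helpers as documented in recent mlx-vlm releases; the exact `generate` signature (argument order, sampling keywords) varies across versions, and the prompt and image URL are placeholders.

```python
# Minimal sketch using the mlx-vlm Python API (load/generate).
# NOTE: argument order and keyword names for generate() differ between
# mlx-vlm releases; check help(generate) for the version you installed.
from mlx_vlm import load, generate

model_path = "mlx-community/dolphin-vision-72b-4bit"
model, processor = load(model_path)  # fetches the weights from the Hub on first use

prompt = "Describe this image."  # placeholder prompt
image = "http://images.cocodataset.org/val2017/000000039769.jpg"  # placeholder image URL

output = generate(model, processor, prompt, image, max_tokens=100, verbose=False)
print(output)
```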