DolphinVision 72b - 4.0bpw EXL2 🐬

Base model: cognitivecomputations/dolphin-vision-72b

The language model is quantized to 4.0bpw, with the FP16 vision layers merged back in.

Text generation works in exllamav2/tabbyAPI. Vision input is not working yet.
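Since text generation works through tabbyAPI's OpenAI-compatible endpoint, here is a minimal client sketch. The host, port, API key, and model name below are placeholders assuming tabbyAPI's defaults; adjust for your setup.

```python
import json
import urllib.request

# Hypothetical local tabbyAPI endpoint (OpenAI-compatible chat completions);
# adjust host/port/key/model name for your deployment.
URL = "http://127.0.0.1:5000/v1/chat/completions"
API_KEY = "your-api-key"  # placeholder

payload = {
    "model": "dolphin-vision-72b-4.0bpw-h6-exl2",
    "messages": [{"role": "user", "content": "Hello!"}],
    "max_tokens": 128,
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
)
# With a running tabbyAPI server, send the request:
# response = json.load(urllib.request.urlopen(req))
```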

N.b. the `architectures` value in config.json has been changed from "BunnyQwenForCausalLM" to "Qwen2ForCausalLM" to prevent the model from being loaded as llama in tabbyAPI.
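The relevant change is the `architectures` field in config.json (all other fields left as-is):

```json
{
  "architectures": [
    "Qwen2ForCausalLM"
  ]
}
```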

