4-bit AWQ-quantized version of HuggingFaceM4/idefics2-8b-chatty. Refer to the original model's card for more information (including an inference snippet).
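
A minimal inference sketch is given below, assuming `transformers` (with Idefics2 support) and `autoawq` are installed; the example image URL and generation settings are placeholders, and the snippet on the original idefics2-8b-chatty card remains the authoritative reference.

```python
# Minimal sketch: load the AWQ-quantized checkpoint and run one chat turn.
# Assumes a recent `transformers` release with Idefics2 support and `autoawq` installed.
import torch
from transformers import AutoProcessor, AutoModelForVision2Seq
from transformers.image_utils import load_image

model_id = "HuggingFaceM4/idefics2-8b-chatty-AWQ"

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # AWQ weights are dequantized to fp16 activations
).to("cuda")

# Placeholder image URL; replace with your own image.
image = load_image("https://example.com/image.jpg")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to("cuda")

generated_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```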

Model size: 2.36B params (Safetensors) · Tensor types: I32, FP16
