This is a 4-bit AWQ-quantized version of HuggingFaceM4/idefics2-8b. Refer to the original model's card for more information, including an inference snippet.

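The original idefics2-8b card contains the reference inference snippet; the sketch below is a rough adaptation to this checkpoint, assuming a recent transformers release with Idefics2 support plus autoawq are installed and that the AWQ weights load directly through `from_pretrained` (the image URL is purely illustrative).

```python
# Minimal inference sketch (assumptions: transformers with Idefics2 support,
# autoawq installed, CUDA device available; mirrors the base idefics2-8b usage).
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

DEVICE = "cuda:0"
MODEL_ID = "HuggingFaceM4/idefics2-8b-AWQ"

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForVision2Seq.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,
).to(DEVICE)

# Load an example image (URL is illustrative, not from this card).
image = Image.open(
    requests.get("https://images.example.com/statue_of_liberty.jpg", stream=True).raw
)

# Build a chat-style prompt with one image and one text turn.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What do we see in this image?"},
        ],
    }
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(DEVICE)

# Generate and decode the answer.
generated_ids = model.generate(**inputs, max_new_tokens=100)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```
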
Model size: 2.36B params (Safetensors). Tensor types: I32, FP16.
