This is the 7B Qwen2-VL image model, exported to ONNX via https://github.com/pdufour/llm-export.

See also https://huggingface.co/pdufour/Qwen2-VL-2B-Instruct-ONNX-Q4-F16 for a 2B model that is compatible with onnxruntime WebGPU.
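Since the export produces plain ONNX graphs, they can be loaded and inspected with onnxruntime. Below is a minimal sketch of loading one of the exported components; the file name `decoder_model.onnx` is a placeholder, not the actual name in this repository, so check the repo's file listing for the component names produced by llm-export.

```python
# Minimal sketch: load one exported ONNX component and inspect its I/O.
# "decoder_model.onnx" is a hypothetical file name -- replace it with an
# actual file from this repository.
import onnxruntime as ort

session = ort.InferenceSession(
    "decoder_model.onnx",
    providers=["CPUExecutionProvider"],  # or "CUDAExecutionProvider" if available
)

# Print each input/output name, shape, and dtype to see what the graph expects.
for inp in session.get_inputs():
    print("input: ", inp.name, inp.shape, inp.type)
for out in session.get_outputs():
    print("output:", out.name, out.shape, out.type)
```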
