# mlx-community/llava-1.5-7b-8bit
This model was converted to MLX format from [llava-hf/llava-1.5-7b-hf](https://huggingface.co/llava-hf/llava-1.5-7b-hf) using mlx-vlm version 0.1.0.

Refer to the [original model card](https://huggingface.co/llava-hf/llava-1.5-7b-hf) for more details on the model.
## Use with mlx

```bash
pip install -U mlx-vlm
```

```bash
python -m mlx_vlm.generate --model mlx-community/llava-1.5-7b-8bit --max-tokens 100 --temp 0.0
```
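Besides the CLI, mlx-vlm can also be driven from Python. The sketch below follows the usage pattern documented in the mlx-vlm README; exact function signatures have changed between releases, so check the version you have installed. The image URL and prompt are placeholder examples, not part of this model card.

```python
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

# Load the quantized model and its processor from the Hub
model_path = "mlx-community/llava-1.5-7b-8bit"
model, processor = load(model_path)
config = load_config(model_path)

# Placeholder inputs: any local path or URL should work here
image = ["http://images.cocodataset.org/val2017/000000039769.jpg"]
prompt = "Describe this image."

# Wrap the prompt in the model's chat template
formatted_prompt = apply_chat_template(
    processor, config, prompt, num_images=len(image)
)

# Run generation and print the decoded text
output = generate(model, processor, formatted_prompt, image, verbose=False)
print(output)
```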