This is a merge of the vision adapters from meta-llama/Llama-3.2-11B-Vision-Instruct onto mlabonne/Hermes-3-Llama-3.1-8B-lorablated.

Please respect the respective licenses of Meta Llama & Nous Research.

The method I used is detailed in this post. I also merged the tokenizer and generation configs. Example Python code for the weight merge is available in merge_vision_example.py; it works for both the 11B and 90B models.
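The core idea can be sketched as a key-selection pass over the two checkpoints: vision-tower, projector, and cross-attention tensors are kept from the Vision-Instruct model, while the remaining language-model tensors are taken from the lorablated base. The snippet below is a minimal illustration of that selection logic, not the actual merge_vision_example.py; the key prefixes (`vision_model.`, `multi_modal_projector.`, `language_model.`, `.cross_attn`) follow the Mllama layout but are assumptions here, and real use would operate on safetensors state dicts rather than plain dicts.

```python
def merge_vision_weights(vision_sd, base_sd):
    """Illustrative merge: keep the vision adapter tensors from the
    vision checkpoint; source the remaining language weights from the
    base model. Key names follow the assumed Mllama naming scheme."""
    merged = {}
    for key, tensor in vision_sd.items():
        if key.startswith(("vision_model.", "multi_modal_projector.")) or ".cross_attn" in key:
            # Vision tower, projector, and cross-attention adapters
            # stay as shipped in the Vision-Instruct checkpoint.
            merged[key] = tensor
        else:
            # Language-model weights: look up the matching key in the
            # base checkpoint (which lacks the "language_model." prefix).
            base_key = key.replace("language_model.", "", 1)
            merged[key] = base_sd.get(base_key, tensor)
    return merged
```

With real models, `vision_sd` and `base_sd` would be loaded shard by shard with `safetensors.torch.load_file` and the result re-sharded before saving.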

This 11B merge is less stable than the 90B version (which is very stable); keep the sampling temperature at or below 0.7.

The 90B version of this merge is available here.

Model size: 10.7B params (Safetensors, BF16)