---
license: llama3.2
base_model:
- meta-llama/Llama-3.2-11B-Vision-Instruct
- invisietch/L3.1-EtherealRainbow-v1.0-rc1-8B
base_model_relation: merge
language:
- en
pipeline_tag: image-text-to-text
tags:
- not-for-all-audiences
- llama
- llama-3
- vision
- multimodal
---

A quick experiment merging the Llama 3.2 11B Vision adapters onto a Llama 3.1 finetune using Grimulkan's fantastic script, outlined [here](https://www.reddit.com/r/LocalLLaMA/comments/1fzduyx/merging_llama_32_vision_adapters_onto_31_finetunes/).

- Vision adapters: [meta-llama/Llama-3.2-11B-Vision](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision)
- Language model: [invisietch/L3.1-EtherealRainbow-v1.0-rc1-8B](https://huggingface.co/invisietch/L3.1-EtherealRainbow-v1.0-rc1-8B)

If you have issues with the tokenizer, use the Llama 3.2 11B tokenizer. If you want to use system messages with image inputs, use the template in `chat_template.json`.

No guarantees it'll work properly, but it seems to be OK for me so far. Sampling parameters seem to be a bit finicky; I'm currently testing different settings and would love to hear what works for y'all.
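For reference, here's a rough sketch of what a system-message-plus-image chat might look like with the standard transformers Mllama API. This is an untested assumption for this particular merge (the model/processor calls are commented out so the message structure is clear); adjust to taste:

```python
# Hypothetical usage sketch — not tested against this merge.
# The commented-out calls are the standard transformers Mllama API:
#
# from transformers import MllamaForConditionalGeneration, AutoProcessor
# model = MllamaForConditionalGeneration.from_pretrained("path/to/this-merge")
# processor = AutoProcessor.from_pretrained("path/to/this-merge")
#   # if the tokenizer misbehaves, load the Llama 3.2 11B tokenizer instead

# Message structure expected by apply_chat_template, with a system
# message alongside an image input (the case chat_template.json covers):
messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are a helpful assistant."}],
    },
    {
        "role": "user",
        "content": [
            {"type": "image"},  # placeholder; the actual image goes to the processor
            {"type": "text", "text": "Describe this image."},
        ],
    },
]

# prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
# inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
# out = model.generate(**inputs, max_new_tokens=128)
# print(processor.decode(out[0], skip_special_tokens=True))
```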