Issue Loading Llama-3.2 (11B) Vision Instruct Model in Colab
Hello Unsloth Team,
I am attempting to load the Llama-3.2 (11B) Vision Instruct model in Colab, but I am encountering the following error:
RuntimeError: The checkpoint you are trying to load has model type `mllama` but Transformers does not recognize this model type.
Could you please confirm if this model is supported on Colab, or if there are specific configurations or dependencies I might be missing? I have updated the necessary libraries, but the issue persists.
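For reference, here is a minimal sketch of the failing load. The Hub model id and the `AutoConfig` call are just my illustration; the exact call in my notebook may differ slightly.

```python
# Minimal reproduction sketch in Colab.
# Assumption: the public Hub id "unsloth/Llama-3.2-11B-Vision-Instruct";
# the exact model id / call in my notebook may differ.
import transformers
print(transformers.__version__)  # version installed in the Colab runtime

from transformers import AutoConfig

# The error surfaces here: the checkpoint's config.json declares
# model_type "mllama", which the installed Transformers build does not recognize.
config = AutoConfig.from_pretrained("unsloth/Llama-3.2-11B-Vision-Instruct")
```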
I would appreciate your assistance in resolving this.
Best regards,
Enes Tura
Currently, Llama 3.2-11B is not supported, but we are working on it and will hopefully have it done by next week.
Is there any update on this?
Still in the works - apologies.
Hello, is there any update on this model? Or could you please let me know whenever the updated model is released...
Yes, we will let you guys know once it's supported so we can update all the model cards. Many thanks!
thank you :)
Hello, is there any progress?
Yes, it's done and ready to go, but we still need to announce it, so this week for sure! :) I will notify you all.
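In the meantime, a provisional sketch of what loading should look like once it's out. The `FastVisionModel` class name, the 4-bit checkpoint id, and the `load_in_4bit` flag here are assumptions on my part; please treat the updated model cards as the authoritative snippet.

```python
# Provisional sketch only -- the FastVisionModel class name, the 4-bit
# checkpoint id, and load_in_4bit are assumptions until the model cards
# are updated with the official snippet.
from unsloth import FastVisionModel

model, tokenizer = FastVisionModel.from_pretrained(
    "unsloth/Llama-3.2-11B-Vision-Instruct-bnb-4bit",
    load_in_4bit = True,  # 4-bit quantization so the 11B model fits on a Colab T4
)
FastVisionModel.for_inference(model)  # switch the model to inference mode
```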