#9: Strange behaviour of Llama3.2-vision: it behaves like a text model (1 reply, opened 3 days ago by jirkazcech)
#8: How to use it in ollama (1 reply, opened 7 days ago by vejahetobeu)
#7: Exporting to GGUF (5 replies, opened 21 days ago by krasivayakoshka)
#6: Training with images (4 replies, opened about 1 month ago by Khawn2u)
#5: AttributeError: Model MllamaForConditionalGeneration does not support BitsAndBytes quantization yet. (1 reply, opened about 2 months ago by luizhsalazar)
#4: How much VRAM is needed? (3 replies, opened 2 months ago by Dizzl500)
#3: How to load this model? (3 replies, opened 3 months ago by benTow07)
#2: Can you post the script that was used to quantize this model, please? (10 replies, opened 3 months ago by ctranslate2-4you)