---
license: apache-2.0
---

This model was quantized with the AutoAWQ fork for LLaVA-v1.6: https://github.com/WanBenLe/AutoAWQ-with-llava-v1.6.git

The source model is `llava-hf/llava-v1.6-mistral-7b-hf`.
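
For reference, the sketch below shows a typical upstream AutoAWQ quantization flow applied to the source checkpoint. It assumes the fork keeps the standard `AutoAWQForCausalLM.from_pretrained` / `quantize` / `save_quantized` API; the `quant_config` values and the output path are common AWQ defaults and illustrative names, not necessarily the exact settings used for this repository.

```python
# Minimal AWQ quantization sketch (assumes the fork follows the upstream AutoAWQ API).
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "llava-hf/llava-v1.6-mistral-7b-hf"   # source model
quant_path = "llava-v1.6-mistral-7b-hf-awq"        # output directory (hypothetical name)

# Typical AWQ settings: 4-bit weights, group size 128, zero-point quantization.
# These are common defaults, not confirmed settings for this checkpoint.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Run activation-aware weight quantization and save the quantized checkpoint.
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```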