
Note:

This repo hosts only a Q5_K_S iMatrix GGUF quant of Poppy Porpoise 0.72 L3 8B, taken from Lewdiculous/Poppy_Porpoise-0.72-L3-8B-GGUF-IQ-Imatrix. The additional files in this repo are intended for personal use with Text Generation WebUI via the llamacpp_HF loader.

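If you want to use the quant outside Text Generation WebUI, the sketch below shows one way to fetch and load it with llama-cpp-python. The GGUF filename passed to `hf_hub_download` is an assumption based on the repo name; check the repo's file listing for the exact name.

```python
# Minimal sketch: download the Q5_K_S GGUF and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="Clevyby/Poppy-Porpoise-0.72-L3-8B-Q5_K_S-GGUF-iMatrix",
    filename="Poppy_Porpoise-0.72-L3-8B-Q5_K_S-imat.gguf",  # assumed filename; verify in the repo
)

# n_gpu_layers=-1 offloads all layers to the GPU if one is available.
llm = Llama(model_path=gguf_path, n_ctx=8192, n_gpu_layers=-1)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```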
- Format: GGUF
- Model size: 8.03B params
- Architecture: llama