# Supa-AI/llama3-8b-cpt-sahabatai-v1-instruct-q2_k-gguf
This model was converted to GGUF format from GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct using llama.cpp. Refer to the original model card for more details on the model.
## Use with llama.cpp

Run interactively from the command line:

```shell
llama-cli --hf-repo Supa-AI/llama3-8b-cpt-sahabatai-v1-instruct-q2_k-gguf --hf-file llama3-8b-cpt-sahabatai-v1-instruct.q2_k.gguf -p "Your prompt here"
```

Or serve the model over HTTP:

```shell
llama-server --hf-repo Supa-AI/llama3-8b-cpt-sahabatai-v1-instruct-q2_k-gguf --hf-file llama3-8b-cpt-sahabatai-v1-instruct.q2_k.gguf -c 2048
```
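When sending raw prompts (for example with `-p` above, or via the server's completion endpoint), instruct models generally expect their chat template to be applied. A minimal sketch of building such a prompt in Python, assuming this model follows the standard Llama 3 instruct template (the `build_llama3_prompt` helper is hypothetical; confirm the exact template against the original model card):

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt in the (assumed) Llama 3 chat format."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        # Leave the assistant header open so the model generates the reply.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt("You are a helpful assistant.", "Halo, apa kabar?")
print(prompt)
```

The resulting string can be passed as-is to `llama-cli -p` or to the server's `/completion` endpoint; when using a chat-style endpoint instead, the server applies the template itself and this step is unnecessary.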
## Model Details
- Quantization Type: q2_k
- Original Model: GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct
- Format: GGUF
## Model Tree

- Base model: aisingapore/llama3-8b-cpt-sea-lionv2-base