
seawolf2357/Meta-Llama-3-8B-Instruct-Q4_K_M-GGUF

This model was converted to GGUF format from meta-llama/Meta-Llama-3-8B-Instruct using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
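
If you just want the quantized file itself, you can fetch it directly with the huggingface-cli tool from the huggingface_hub package; a minimal sketch (the filename matches the one used in the commands below):

# Download the single Q4_K_M GGUF file into the current directory
pip install huggingface_hub
huggingface-cli download seawolf2357/Meta-Llama-3-8B-Instruct-Q4_K_M-GGUF meta-llama-3-8b-instruct.Q4_K_M.gguf --local-dir .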

Use with llama.cpp

Install llama.cpp through brew.

brew install ggerganov/ggerganov/llama.cpp

Invoke the llama.cpp server or the CLI.

CLI:

llama-cli --hf-repo seawolf2357/Meta-Llama-3-8B-Instruct-Q4_K_M-GGUF --model meta-llama-3-8b-instruct.Q4_K_M.gguf -p "The meaning to life and the universe is"
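
For multi-turn use, recent llama.cpp builds also offer an interactive conversation mode; a sketch assuming the -cnv flag is available in your build, with -p acting as the system prompt:

llama-cli --hf-repo seawolf2357/Meta-Llama-3-8B-Instruct-Q4_K_M-GGUF --model meta-llama-3-8b-instruct.Q4_K_M.gguf -cnv -p "You are a helpful assistant."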

Server:

llama-server --hf-repo seawolf2357/Meta-Llama-3-8B-Instruct-Q4_K_M-GGUF --model meta-llama-3-8b-instruct.Q4_K_M.gguf -c 2048
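
Once the server is running, you can query it over its OpenAI-compatible HTTP API; a minimal sketch assuming the default address of localhost:8080:

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{"messages": [{"role": "user", "content": "The meaning to life and the universe is"}], "max_tokens": 128}'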

Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
./main -m meta-llama-3-8b-instruct.Q4_K_M.gguf -n 128
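
Note: the --hf-repo flag used in the commands above may require a llama.cpp build with curl support; with the Makefile build this is assumed to be enabled via:

make LLAMA_CURL=1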

Model details: GGUF format, 8.03B params, llama architecture, 4-bit (Q4_K_M) quantization.
