ruslanmv/Medical-Llama3-v2-Q4_K_M-GGUF

This model was converted to GGUF format from ruslanmv/Medical-Llama3-v2 using llama.cpp, via the Convert Model to GGUF tool.

Key Features:

  • Quantized to Q4_K_M (GGUF format) for reduced file size and memory use
  • Optimized for use with llama.cpp
  • Compatible with llama-server for efficient serving

Refer to the original model card for more details on the base model.

Usage with llama.cpp

1. Install llama.cpp:

brew install llama.cpp  # For macOS/Linux
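
If Homebrew is not available, llama.cpp can also be built from source. A minimal sketch of the standard CMake build described in the llama.cpp repository; the resulting llama-cli and llama-server binaries end up in build/bin:

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release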

2. Run Inference:

CLI:

llama-cli --hf-repo ruslanmv/Medical-Llama3-v2-Q4_K_M-GGUF --hf-file medical-llama3-v2-q4_k_m.gguf -p "Your prompt here"
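
The -p flag passes a single prompt. For longer generations or an interactive chat session, a few common llama-cli flags can be added; a sketch, assuming the chat template embedded in the GGUF file (-c sets the context size, -n the number of tokens to generate, -cnv enables conversation mode):

llama-cli --hf-repo ruslanmv/Medical-Llama3-v2-Q4_K_M-GGUF --hf-file medical-llama3-v2-q4_k_m.gguf -c 2048 -n 256 -cnv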

Server:

llama-server --hf-repo ruslanmv/Medical-Llama3-v2-Q4_K_M-GGUF --hf-file medical-llama3-v2-q4_k_m.gguf -c 2048
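
Once running, llama-server exposes an OpenAI-compatible HTTP API, by default on port 8080. A minimal sketch of a chat completion request with curl; the prompt text is illustrative only:

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "What are common symptoms of dehydration?"}]}'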

For more advanced usage, refer to the llama.cpp repository.

Model details

  • Architecture: llama
  • Parameters: 8.03B
  • Quantization: 4-bit (Q4_K_M)