---
language:
  - en
tags:
  - facebook
  - meta
  - pytorch
  - llama
  - llama-2
license: llama2
---

# Model Card for LlamaGuard-7b-GGUF

This is a quantized version of meta-llama/LlamaGuard-7b in GGUF format. Two quantization methods were used (a usage sketch follows the list below):

- **Q5_K_M**: 5-bit; preserves most of the model's performance
- **Q4_K_M**: 4-bit; smaller footprint and lower memory use
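
The GGUF files can be loaded with any llama.cpp-compatible runtime. Below is a minimal sketch using llama-cpp-python; the local filename and the plain example prompt are assumptions, and the actual LlamaGuard safety-classification prompt template is described in Meta's official model card linked under Model Description.

```python
# Minimal sketch, assuming llama-cpp-python is installed
# (pip install llama-cpp-python) and that the Q4_K_M file has been
# downloaded locally as "llamaguard-7b.Q4_K_M.gguf" (filename assumed).
from llama_cpp import Llama

llm = Llama(
    model_path="llamaguard-7b.Q4_K_M.gguf",  # assumed local path to the GGUF file
    n_ctx=4096,                              # context window size
)

# LlamaGuard expects the prompt template from Meta's model card;
# a plain prompt is used here only to illustrate the call.
output = llm(
    "User: How do I bake a cake?",
    max_tokens=64,
    temperature=0.0,  # deterministic output for classification-style use
)

print(output["choices"][0]["text"])
```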

## Model Details

### Model Description

Refer to [Meta's official model card](https://huggingface.co/meta-llama/LlamaGuard-7b) for details.