Llama-3.2-1B-Instruct-GGUF

Original Model

meta-llama/Llama-3.2-1B-Instruct

Run with LlamaEdge

  • LlamaEdge version: v0.14.5 and above

  • Prompt template

    • Prompt type: llama-3-chat

    • Prompt string (a filled-in example appears after this list)

      <|begin_of_text|><|start_header_id|>system<|end_header_id|>
      
      {{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>
      
      {{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
      
      {{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>
      
      {{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
      
  • Context size: 128000

  • Run as LlamaEdge service (see the example request after this list)

    wasmedge --dir .:. --nn-preload default:GGML:AUTO:Llama-3.2-1B-Instruct-Q5_K_M.gguf \
        llama-api-server.wasm \
        --prompt-template llama-3-chat \
        --ctx-size 128000 \
        --model-name Llama-3.2-1b
    
  • Run as LlamaEdge command app

    wasmedge --dir .:. --nn-preload default:GGML:AUTO:Llama-3.2-1B-Instruct-Q5_K_M.gguf \
      llama-chat.wasm \
      --prompt-template llama-3-chat \
      --ctx-size 128000
    
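For reference, here is the prompt string filled in for a single turn. The system prompt and user message below are placeholder values of our choosing, not part of the template; the model's reply is generated after the final assistant header:

    <|begin_of_text|><|start_header_id|>system<|end_header_id|>

    You are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>

    What is the capital of France?<|eot_id|><|start_header_id|>assistant<|end_header_id|>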

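Once the API server is running, it can be queried with any OpenAI-compatible client. A minimal sketch using curl, assuming the server listens on LlamaEdge's default port 8080 and uses the model name from the service command above:

    curl -X POST http://localhost:8080/v1/chat/completions \
        -H 'Content-Type: application/json' \
        -d '{
              "model": "Llama-3.2-1b",
              "messages": [
                  {"role": "system", "content": "You are a helpful assistant."},
                  {"role": "user", "content": "What is the capital of France?"}
              ]
            }'
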
Quantized GGUF Models

| Name | Quant method | Bits | Size | Use case |
| ---- | ------------ | ---- | ---- | -------- |
| Llama-3.2-1B-Instruct-Q2_K.gguf | Q2_K | 2 | 581 MB | smallest, significant quality loss - not recommended for most purposes |
| Llama-3.2-1B-Instruct-Q3_K_L.gguf | Q3_K_L | 3 | 733 MB | small, substantial quality loss |
| Llama-3.2-1B-Instruct-Q3_K_M.gguf | Q3_K_M | 3 | 691 MB | very small, high quality loss |
| Llama-3.2-1B-Instruct-Q3_K_S.gguf | Q3_K_S | 3 | 642 MB | very small, high quality loss |
| Llama-3.2-1B-Instruct-Q4_0.gguf | Q4_0 | 4 | 771 MB | legacy; small, very high quality loss - prefer using Q3_K_M |
| Llama-3.2-1B-Instruct-Q4_K_M.gguf | Q4_K_M | 4 | 808 MB | medium, balanced quality - recommended |
| Llama-3.2-1B-Instruct-Q4_K_S.gguf | Q4_K_S | 4 | 776 MB | small, greater quality loss |
| Llama-3.2-1B-Instruct-Q5_0.gguf | Q5_0 | 5 | 893 MB | legacy; medium, balanced quality - prefer using Q4_K_M |
| Llama-3.2-1B-Instruct-Q5_K_M.gguf | Q5_K_M | 5 | 912 MB | large, very low quality loss - recommended |
| Llama-3.2-1B-Instruct-Q5_K_S.gguf | Q5_K_S | 5 | 893 MB | large, low quality loss - recommended |
| Llama-3.2-1B-Instruct-Q6_K.gguf | Q6_K | 6 | 1.02 GB | very large, extremely low quality loss |
| Llama-3.2-1B-Instruct-Q8_0.gguf | Q8_0 | 8 | 1.32 GB | very large, extremely low quality loss - not recommended |
| Llama-3.2-1B-Instruct-f16.gguf | f16 | 16 | 2.48 GB | |

Quantized with llama.cpp b3807
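
To fetch a single quantized file from this repo, a minimal sketch using the huggingface-cli tool (the Q5_K_M file is just an example choice):

    # download only the named GGUF file into the current directory
    huggingface-cli download second-state/Llama-3.2-1B-Instruct-GGUF \
        Llama-3.2-1B-Instruct-Q5_K_M.gguf \
        --local-dir .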
