
DeepSeek-V2.5-GGUF

Original Model

deepseek-ai/DeepSeek-V2.5

Run with LlamaEdge

  • LlamaEdge version: coming soon
  • Prompt template

    • Prompt type: deepseek-chat-25

    • Prompt string

      <|begin_of_sentence|>{system_message}<|User|>{user_message_1}<|Assistant|>{assistant_message_1}<|end_of_sentence|><|User|>{user_message_2}<|Assistant|>
      
  • Context size: 128000

  • Run as LlamaEdge service

    wasmedge --dir .:. \
      --nn-preload default:GGML:AUTO:DeepSeek-V2.5-Q5_K_M.gguf \
      llama-api-server.wasm \
      --prompt-template deepseek-chat-25 \
      --ctx-size 128000 \
      --model-name DeepSeek-V2.5
    
  • Run as LlamaEdge command app

    wasmedge --dir .:. \
      --nn-preload default:GGML:AUTO:DeepSeek-V2.5-Q5_K_M.gguf \
      llama-chat.wasm \
      --prompt-template deepseek-chat-25 \
      --ctx-size 128000
    
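To make the prompt template above concrete, the following sketch assembles a single-turn prompt from it. The system and user messages are hypothetical example values, not part of the model card.

```shell
# Hypothetical example messages; substitute your own.
SYSTEM="You are a helpful assistant."
USER_MSG="What is the capital of France?"

# Assemble a single-turn prompt following the deepseek-chat-25 template:
# <|begin_of_sentence|>{system}<|User|>{user}<|Assistant|>
PROMPT="<|begin_of_sentence|>${SYSTEM}<|User|>${USER_MSG}<|Assistant|>"
printf '%s\n' "$PROMPT"
```

Note that when using the API server or chat app above, this formatting is applied for you by the `--prompt-template deepseek-chat-25` option; manual assembly is only needed if you drive the model directly.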
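Once the API server is running, it exposes an OpenAI-compatible endpoint. The request below is a sketch that assumes the server's default listen address of port 8080; adjust the host and port if you started it with a different socket address. The message contents are example values.

```shell
# Query the running llama-api-server via its OpenAI-compatible
# chat completions endpoint (assumes default port 8080).
curl -X POST http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
        "model": "DeepSeek-V2.5",
        "messages": [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "What is the capital of France?"}
        ]
      }'
```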

Quantized with llama.cpp b3664

Model size: 236B params
Architecture: deepseek2

