stable-diffusion-3-medium-GGUF

Original Model

stabilityai/stable-diffusion-3-medium

Run with LlamaEdge-StableDiffusion

  • Version: v0.2.0

  • Run as LlamaEdge service

    wasmedge --dir .:. sd-api-server.wasm \
      --model-name sd-3-medium \
      --model sd3-medium-Q5_0.gguf
    
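Once the server is up, it should accept image generation requests over HTTP. A minimal request sketch, assuming the server listens on the default port 8080 and exposes an OpenAI-style `/v1/images/generations` route (check the sd-api-server startup log for the actual socket address):

    curl -X POST 'http://localhost:8080/v1/images/generations' \
        --header 'Content-Type: application/json' \
        --data '{
            "model": "sd-3-medium",
            "prompt": "A lighthouse on a rocky cliff at sunset"
        }'

The `model` value must match the `--model-name` passed to the server above.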

Quantized GGUF Models

Name                  Quant method  Bits  Size
sd3-medium-Q4_0.gguf  Q4_0          4     4.55 GB
sd3-medium-Q4_1.gguf  Q4_1          4     5.04 GB
sd3-medium-Q5_0.gguf  Q5_0          5     5.53 GB
sd3-medium-Q5_1.gguf  Q5_1          5     6.03 GB
sd3-medium-Q8_0.gguf  Q8_0          8     8.45 GB
sd3-medium-f16.gguf   f16           16    15.8 GB
sd3-medium-f32.gguf   f32           32    31.5 GB
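To fetch one of the quantized files before starting the server, a plain `curl` download works; the URL below assumes this repository lives at `second-state/stable-diffusion-3-medium-GGUF` on Hugging Face (adjust the filename to the quantization you want from the table above):

    curl -L -O 'https://huggingface.co/second-state/stable-diffusion-3-medium-GGUF/resolve/main/sd3-medium-Q5_0.gguf'

Q5_0 is the file used in the run command above; pick a smaller quant (Q4_0) to trade quality for memory, or f16/f32 for maximum fidelity.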

Quantized with stable-diffusion.cpp master-697d000.
