stable-diffusion-v-1-4-GGUF

Original Model

CompVis/stable-diffusion-v-1-4-original

Run with LlamaEdge-StableDiffusion

  • Version: v0.2.0

  • Run as LlamaEdge service

    wasmedge --dir .:. sd-api-server.wasm \
      --model-name sd-v1.4 \
      --model stable-diffusion-v1-4-Q8_0.gguf
    

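Once the service is running, it accepts HTTP requests. A minimal query sketch in Python, assuming the server listens on localhost:8080 (the LlamaEdge default) and exposes an OpenAI-style `/v1/images/generations` endpoint; the exact path and response shape should be verified against your LlamaEdge-StableDiffusion version.

```python
import json
from urllib import request

# Assumed endpoint: sd-api-server is taken to bind 0.0.0.0:8080 by default and
# to follow the OpenAI images API shape -- verify against your server version.
URL = "http://localhost:8080/v1/images/generations"

# The model name must match the --model-name flag passed to sd-api-server.
payload = {
    "model": "sd-v1.4",
    "prompt": "a photograph of an astronaut riding a horse",
}

def generate_image(url: str = URL) -> dict:
    """POST the payload as JSON and return the decoded response."""
    req = request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```

Calling `generate_image()` returns the server's JSON response once the service started above is up.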
Quantized GGUF Models

Lower-precision quantizations reduce file size at the cost of some output quality; higher-precision formats preserve quality but take more disk space and memory.

| Name | Quant method | Bits | Size |
| ---- | ------------ | ---- | ---- |
| stable-diffusion-v1-4-Q4_0.gguf | Q4_0 | 4 | 1.57 GB |
| stable-diffusion-v1-4-Q4_1.gguf | Q4_1 | 4 | 1.59 GB |
| stable-diffusion-v1-4-Q5_0.gguf | Q5_0 | 5 | 1.62 GB |
| stable-diffusion-v1-4-Q5_1.gguf | Q5_1 | 5 | 1.64 GB |
| stable-diffusion-v1-4-Q8_0.gguf | Q8_0 | 8 | 1.76 GB |
| stable-diffusion-v1-4-f16.gguf | f16 | 16 | 2.13 GB |
| stable-diffusion-v1-4-f32.gguf | f32 | 32 | 4.27 GB |
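Each file above can be fetched directly from the repository. A small Python sketch of Hugging Face's `resolve/main` direct-download URL convention for this repo (the helper name is ours, not part of any API):

```python
from urllib.parse import quote

# This model repository on Hugging Face.
REPO = "second-state/stable-diffusion-v-1-4-GGUF"

def download_url(filename: str, repo: str = REPO) -> str:
    """Build the direct-download URL for a file in the repo,
    following Hugging Face's resolve/main convention."""
    return f"https://huggingface.co/{repo}/resolve/main/{quote(filename)}"
```

The resulting URL can be passed to `curl -LO` or `wget` to download, e.g., `stable-diffusion-v1-4-Q8_0.gguf`.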
Model size: 1.07B params

