Commit d256fb2 (parent: cb3be0b) by apepkuss79: Update README.md
Files changed (1): README.md (+56, -48)
---
base_model: stabilityai/stable-diffusion-3-medium
license: other
license_name: stabilityai-ai-community
license_link: LICENSE.md
model_creator: stabilityai
model_name: stable-diffusion-3-medium
quantized_by: Second State Inc.
tags:
- text-to-image
- stable-diffusion
- diffusion-single-file
inference: false
language:
- en
pipeline_tag: text-to-image
---

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# stable-diffusion-3-medium-GGUF

## Original Model

[stabilityai/stable-diffusion-3-medium](https://huggingface.co/stabilityai/stable-diffusion-3-medium)

## Run with LlamaEdge-StableDiffusion

- Version: [v0.2.0](https://github.com/LlamaEdge/sd-api-server/releases/tag/0.2.0)

- Run as LlamaEdge service

  ```bash
  wasmedge --dir .:. sd-api-server.wasm \
    --model-name sd-3-medium \
    --model sd3-medium-Q5_0.gguf
  ```
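Once the server is running, images can be requested over HTTP. The endpoint path, the default port `8080`, and the JSON fields below are assumptions based on typical `sd-api-server` usage, not details stated in this README; check the server's own documentation if the request fails:

```shell
# Assumed OpenAI-style image-generation endpoint on the assumed default
# port 8080; adjust host/port if the server was started differently.
curl -s -X POST 'http://localhost:8080/v1/images/generations' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "sd-3-medium",
    "prompt": "A photo of a cute corgi wearing sunglasses"
  }'
```

The `model` value should match the `--model-name` passed to `sd-api-server.wasm` above.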

## Quantized GGUF Models

| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [sd3-medium-Q4_0.gguf](https://huggingface.co/second-state/stable-diffusion-3-medium-GGUF/blob/main/sd3-medium-Q4_0.gguf) | Q4_0 | 4 | 4.55 GB | |
| [sd3-medium-Q4_1.gguf](https://huggingface.co/second-state/stable-diffusion-3-medium-GGUF/blob/main/sd3-medium-Q4_1.gguf) | Q4_1 | 4 | 5.04 GB | |
| [sd3-medium-Q5_0.gguf](https://huggingface.co/second-state/stable-diffusion-3-medium-GGUF/blob/main/sd3-medium-Q5_0.gguf) | Q5_0 | 5 | 5.53 GB | |
| [sd3-medium-Q5_1.gguf](https://huggingface.co/second-state/stable-diffusion-3-medium-GGUF/blob/main/sd3-medium-Q5_1.gguf) | Q5_1 | 5 | 6.03 GB | |
| [sd3-medium-Q8_0.gguf](https://huggingface.co/second-state/stable-diffusion-3-medium-GGUF/blob/main/sd3-medium-Q8_0.gguf) | Q8_0 | 8 | 8.45 GB | |
| [sd3-medium-f16.gguf](https://huggingface.co/second-state/stable-diffusion-3-medium-GGUF/blob/main/sd3-medium-f16.gguf) | f16 | 16 | 15.8 GB | |
| [sd3-medium-f32.gguf](https://huggingface.co/second-state/stable-diffusion-3-medium-GGUF/blob/main/sd3-medium-f32.gguf) | f32 | 32 | 31.5 GB | |
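To fetch one of these files directly, note that the table links point at Hugging Face's `/blob/` pages; the raw file is served from the corresponding `/resolve/` path. A minimal sketch with `curl`, assuming that standard URL convention:

```shell
# Download the Q5_0 quant used in the run command above;
# swap the filename for any other row in the table.
curl -L -O 'https://huggingface.co/second-state/stable-diffusion-3-medium-GGUF/resolve/main/sd3-medium-Q5_0.gguf'
```

`-L` follows the redirect to the CDN, and `-O` keeps the original filename.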

**Quantized with stable-diffusion.cpp `master-697d000`.**