The SDXL Turbo model is converted to OpenVINO for fast inference on CPU. This model is intended for research purposes only.

Original model: [sdxl-turbo](https://huggingface.co/stabilityai/sdxl-turbo)

You can use this model with [FastSD CPU](https://github.com/rupeshs/fastsdcpu).

![Sample](./out_image.png)

To run the model yourself, you can leverage the 🧨 Diffusers library through Optimum Intel:

1. Install the dependencies:
```shell
pip install optimum-intel openvino diffusers onnx
```
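Before moving on, you can sanity-check that the packages resolved in your environment. This helper is not part of the official install steps, just an assumed convenience built on the standard library:

```python
from importlib.util import find_spec

# Top-level modules provided by the pip command above
REQUIRED = ["optimum", "openvino", "diffusers", "onnx"]

def missing(packages):
    """Return the packages that cannot be found in the current environment."""
    return [p for p in packages if find_spec(p) is None]

print(missing(REQUIRED))  # an empty list means everything is importable
```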

2. Run the model:
```py
from optimum.intel.openvino.modeling_diffusion import OVStableDiffusionXLPipeline

# Load the INT8 OpenVINO pipeline; weights download from the Hub on first run
pipeline = OVStableDiffusionXLPipeline.from_pretrained(
    "rupeshs/sdxl-turbo-openvino-int8",
    ov_config={"CACHE_DIR": ""},
)
prompt = "Teddy bears working on new AI research on the moon in the 1980s"

# SDXL Turbo is distilled for one-step generation; guidance_scale=1.0 disables CFG
images = pipeline(
    prompt=prompt,
    width=512,
    height=512,
    num_inference_steps=1,
    guidance_scale=1.0,
).images
images[0].save("out_image.png")
```
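The pipeline object can be reused across calls: the model compiles once, and each subsequent call only pays inference cost. Below is a minimal sketch for generating several prompts in a row, assuming a pipeline loaded as in the snippet above; the `slug` helper is a hypothetical convenience for deriving filenames, not part of Optimum Intel, and the pipeline call is commented out so the sketch runs without downloading the model:

```python
import re

def slug(prompt: str, max_len: int = 40) -> str:
    """Turn a prompt into a short, filesystem-safe filename stem."""
    s = re.sub(r"[^a-z0-9]+", "-", prompt.lower()).strip("-")
    return s[:max_len].rstrip("-")

prompts = [
    "Teddy bears working on new AI research on the moon in the 1980s",
    "A watercolor fox in a misty forest",
]

for p in prompts:
    # Hypothetical reuse of the pipeline loaded above:
    # image = pipeline(prompt=p, width=512, height=512,
    #                  num_inference_steps=1, guidance_scale=1.0).images[0]
    # image.save(f"{slug(p)}.png")
    print(f"{slug(p)}.png")
```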

## License

The SDXL Turbo Model is licensed under the Stability AI Non-Commercial Research Community License, Copyright (c) Stability AI Ltd. All Rights Reserved.