models from Olive
- ORT_CUDA/sd-xl-base-1.0/engine/clip2.ort_cuda.fp16/model.onnx.data +0 -3
- ORT_CUDA/sd-xl-refiner-1.0/engine/unetxl.ort_cuda.fp16/model.onnx.data +0 -3
- ORT_CUDA/sd-xl-refiner-1.0/engine/vae.ort_cuda.fp16/model.onnx +0 -3
- README.md +13 -13
- model_index.json +46 -0
- scheduler/scheduler_config.json +21 -0
- {ORT_CUDA/sd-xl-base-1.0/engine/clip.ort_cuda.fp16 → text_encoder}/model.onnx +2 -2
- {ORT_CUDA/sd-xl-base-1.0/engine/unetxl.ort_cuda.fp16 → text_encoder_2}/model.onnx +2 -2
- {ORT_CUDA/sd-xl-refiner-1.0/engine/clip2.ort_cuda.fp16 → text_encoder_2}/model.onnx.data +2 -2
- tokenizer/merges.txt +0 -0
- tokenizer/special_tokens_map.json +24 -0
- tokenizer/tokenizer_config.json +30 -0
- tokenizer/vocab.json +0 -0
- tokenizer_2/merges.txt +0 -0
- tokenizer_2/special_tokens_map.json +24 -0
- tokenizer_2/tokenizer_config.json +38 -0
- tokenizer_2/vocab.json +0 -0
- {ORT_CUDA/sd-xl-refiner-1.0/engine/unetxl.ort_cuda.fp16 → unet}/model.onnx +2 -2
- {ORT_CUDA/sd-xl-base-1.0/engine/unetxl.ort_cuda.fp16 → unet}/model.onnx.data +1 -1
- {ORT_CUDA/sd-xl-base-1.0/engine/clip2.ort_cuda.fp16 → vae_decoder}/model.onnx +2 -2
- {ORT_CUDA/sd-xl-refiner-1.0/engine/clip2.ort_cuda.fp16 → vae_encoder}/model.onnx +2 -2
ORT_CUDA/sd-xl-base-1.0/engine/clip2.ort_cuda.fp16/model.onnx.data
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:f928e0fb8826f641d36a14760546caa23d97366db4c15de1a5188802bd21e97e
-size 1389319680
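The deleted entries above are Git LFS pointer stubs (version, oid, size), not the weights themselves. As an illustration, a pointer like the one removed here can be parsed with a few lines of Python (`parse_lfs_pointer` is a hypothetical helper, not part of this repository or the git-lfs tooling):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:f928e0fb8826f641d36a14760546caa23d97366db4c15de1a5188802bd21e97e
size 1389319680"""

info = parse_lfs_pointer(pointer)
print(info["oid"], int(info["size"]))
```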
ORT_CUDA/sd-xl-refiner-1.0/engine/unetxl.ort_cuda.fp16/model.onnx.data
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:94919ef99ac87418ad84b94ddc8581fc330390b3e3fd8839eee649b630eda78f
-size 4519331328
ORT_CUDA/sd-xl-refiner-1.0/engine/vae.ort_cuda.fp16/model.onnx
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:bbcf032f2aef098c298c82cff7e66160a8bc0937b99943914e8f75575a3a50fe
-size 99070466
README.md
CHANGED
@@ -18,6 +18,11 @@ tags:
 
 This repository hosts the optimized versions of **Stable Diffusion XL 1.0** to accelerate inference with ONNX Runtime CUDA execution provider.
 
+The models are generated by [Olive](https://github.com/microsoft/Olive/tree/main/examples/stable_diffusion) with a command like the following:
+```
+python stable_diffusion_xl.py --provider cuda --optimize --use_fp16_fixed_vae
+```
+
 See the [usage instructions](#usage-example) for how to run the SDXL pipeline with the ONNX files hosted in this repository.
 
 ## Model Description
@@ -35,16 +40,12 @@ The VAE decoder is converted from [sdxl-vae-fp16-fix](https://huggingface.co/mad
 
 Below is average latency of generating an image of size 1024x1024 using NVIDIA A100-SXM4-80GB GPU:
 
-
-
-
-
-| Dynamic | 1 | 3779 ms | 3458 ms |
-| Dynamic | 4 | 13504 ms | 12347 ms |
+| Batch Size | PyTorch 2.1 | ONNX Runtime CUDA |
+|------------|-------------|-------------------|
+| 1          | 3779 ms     | 3389 ms           |
+| 4          | 13504 ms    | 12264 ms          |
 
-
-
-Dynamic means the engine is built to support dynamic batch size and image sizes.
+In this test, CUDA graph was used to speed up both the torch-compiled unet and ONNX Runtime.
 
 ## Usage Example
 
@@ -82,6 +83,8 @@ sh build.sh --config Release --build_shared_lib --parallel --use_cuda --cuda_ve
 python3 -m pip install build/Linux/Release/dist/onnxruntime_gpu-*-cp310-cp310-linux_x86_64.whl --force-reinstall
 ```
 
+If the GPU is not A100, change CMAKE_CUDA_ARCHITECTURES=80 in the command line according to the GPU compute capability (like 89 for RTX 4090, or 86 for RTX 3090). If your machine has less than 64GB memory, replace --parallel with --parallel 4 --nvcc_threads 1 to avoid running out of memory.
+
 5. Install libraries and requirements
 ```shell
 python3 -m pip install --upgrade pip
@@ -94,8 +97,5 @@ python3 -m pip install --upgrade polygraphy onnx-graphsurgeon --extra-index-url
 ```shell
 python3 demo_txt2img_xl.py \
   "starry night over Golden Gate Bridge by van gogh" \
-  --
-  --height 1024 \
-  --denoising-steps 8 \
-  --work-dir /workspace/stable-diffusion-xl-1.0-onnxruntime
+  --engine-dir /workspace/stable-diffusion-xl-1.0-onnxruntime
 ```
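The README's updated latency table (3779 ms vs 3389 ms at batch 1, 13504 ms vs 12264 ms at batch 4) implies roughly a 9-10% latency reduction over PyTorch 2.1. A quick check of that arithmetic:

```python
# batch size -> (PyTorch 2.1 ms, ONNX Runtime CUDA ms), from the README table
latency = {1: (3779, 3389), 4: (13504, 12264)}

reductions = {}
for batch, (torch_ms, ort_ms) in latency.items():
    reductions[batch] = 100.0 * (torch_ms - ort_ms) / torch_ms
    print(f"batch {batch}: {reductions[batch]:.1f}% lower latency with ONNX Runtime CUDA")
```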
model_index.json
ADDED
@@ -0,0 +1,46 @@
+{
+  "_class_name": "ORTStableDiffusionXLPipeline",
+  "_diffusers_version": "0.24.0",
+  "_name_or_path": "stabilityai/stable-diffusion-xl-base-1.0",
+  "feature_extractor": [
+    null,
+    null
+  ],
+  "force_zeros_for_empty_prompt": true,
+  "image_encoder": [
+    null,
+    null
+  ],
+  "scheduler": [
+    "diffusers",
+    "EulerDiscreteScheduler"
+  ],
+  "text_encoder": [
+    "diffusers",
+    "OnnxRuntimeModel"
+  ],
+  "text_encoder_2": [
+    "diffusers",
+    "OnnxRuntimeModel"
+  ],
+  "tokenizer": [
+    "transformers",
+    "CLIPTokenizer"
+  ],
+  "tokenizer_2": [
+    "transformers",
+    "CLIPTokenizer"
+  ],
+  "unet": [
+    "diffusers",
+    "OnnxRuntimeModel"
+  ],
+  "vae_decoder": [
+    "diffusers",
+    "OnnxRuntimeModel"
+  ],
+  "vae_encoder": [
+    "diffusers",
+    "OnnxRuntimeModel"
+  ]
+}
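model_index.json tells the pipeline loader which library and class handle each component. A stdlib-only sketch of reading that mapping (the JSON is inlined below for illustration, abridged to the component entries shown above):

```python
import json

model_index = json.loads("""{
  "_class_name": "ORTStableDiffusionXLPipeline",
  "scheduler": ["diffusers", "EulerDiscreteScheduler"],
  "text_encoder": ["diffusers", "OnnxRuntimeModel"],
  "text_encoder_2": ["diffusers", "OnnxRuntimeModel"],
  "tokenizer": ["transformers", "CLIPTokenizer"],
  "tokenizer_2": ["transformers", "CLIPTokenizer"],
  "unet": ["diffusers", "OnnxRuntimeModel"],
  "vae_decoder": ["diffusers", "OnnxRuntimeModel"],
  "vae_encoder": ["diffusers", "OnnxRuntimeModel"]
}""")

# Component entries are [library, class] pairs; underscore keys are metadata.
components = {k: tuple(v) for k, v in model_index.items() if isinstance(v, list)}
onnx_parts = [k for k, (_, cls) in components.items() if cls == "OnnxRuntimeModel"]
print(sorted(onnx_parts))
```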
scheduler/scheduler_config.json
ADDED
@@ -0,0 +1,21 @@
+{
+  "_class_name": "EulerDiscreteScheduler",
+  "_diffusers_version": "0.24.0",
+  "beta_end": 0.012,
+  "beta_schedule": "scaled_linear",
+  "beta_start": 0.00085,
+  "clip_sample": false,
+  "interpolation_type": "linear",
+  "num_train_timesteps": 1000,
+  "prediction_type": "epsilon",
+  "sample_max_value": 1.0,
+  "set_alpha_to_one": false,
+  "sigma_max": null,
+  "sigma_min": null,
+  "skip_prk_steps": true,
+  "steps_offset": 1,
+  "timestep_spacing": "leading",
+  "timestep_type": "discrete",
+  "trained_betas": null,
+  "use_karras_sigmas": false
+}
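The scheduler config above selects a `scaled_linear` beta schedule with `beta_start=0.00085` and `beta_end=0.012` over 1000 training timesteps. A stdlib-only sketch of how those values translate into noise sigmas, following the standard DDPM/Euler formulation (an illustration, not the diffusers implementation itself):

```python
import math

# Values from scheduler/scheduler_config.json
beta_start, beta_end, num_train_timesteps = 0.00085, 0.012, 1000

# "scaled_linear": betas interpolate linearly in sqrt-space, then are squared
sqrt_start, sqrt_end = math.sqrt(beta_start), math.sqrt(beta_end)
betas = [
    (sqrt_start + (sqrt_end - sqrt_start) * i / (num_train_timesteps - 1)) ** 2
    for i in range(num_train_timesteps)
]

# sigma_t = sqrt((1 - alpha_bar_t) / alpha_bar_t), where alpha_bar_t is the
# cumulative product of (1 - beta)
alpha_bar = 1.0
sigmas = []
for beta in betas:
    alpha_bar *= 1.0 - beta
    sigmas.append(math.sqrt((1.0 - alpha_bar) / alpha_bar))

print(f"{len(sigmas)} sigmas, min {sigmas[0]:.4f}, max {sigmas[-1]:.2f}")
```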
{ORT_CUDA/sd-xl-base-1.0/engine/clip.ort_cuda.fp16 → text_encoder}/model.onnx
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:661f552b00c76982e4a8e5d7a8814c33ff9354fb0298666ec875a1762f6d5076
+size 246178359
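Each renamed entry above records the new file's sha256 oid and byte size, which is enough to verify a download locally. A sketch of that check (`verify_against_pointer` is a hypothetical helper, demonstrated on a small temp file rather than the real 246 MB model):

```python
import hashlib
import os
import tempfile

def verify_against_pointer(path: str, oid_hex: str, size: int) -> bool:
    """Check a downloaded file against the oid/size from its LFS pointer."""
    if os.path.getsize(path) != size:
        return False
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == oid_hex

# Stand-in content instead of the real model.onnx
data = b"dummy weights"
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(data)
    path = f.name

expected = hashlib.sha256(data).hexdigest()
ok = verify_against_pointer(path, expected, len(data))
bad = verify_against_pointer(path, expected, len(data) - 1)
os.unlink(path)
print(ok, bad)
```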
{ORT_CUDA/sd-xl-base-1.0/engine/unetxl.ort_cuda.fp16 → text_encoder_2}/model.onnx
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:6b9f04f2d71f0ae8cbb085a78e1e65e2509607d694a84c15dde6de1ce2db58e0
+size 1389427378
{ORT_CUDA/sd-xl-refiner-1.0/engine/clip2.ort_cuda.fp16 → text_encoder_2}/model.onnx.data
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:3da7ac65349fbd092e836e3eeca2c22811317bc804fd70af157b4550f2d4bcb5
+size 2778639360
tokenizer/merges.txt
ADDED
The diff for this file is too large to render.
tokenizer/special_tokens_map.json
ADDED
@@ -0,0 +1,24 @@
+{
+  "bos_token": {
+    "content": "<|startoftext|>",
+    "lstrip": false,
+    "normalized": true,
+    "rstrip": false,
+    "single_word": false
+  },
+  "eos_token": {
+    "content": "<|endoftext|>",
+    "lstrip": false,
+    "normalized": true,
+    "rstrip": false,
+    "single_word": false
+  },
+  "pad_token": "<|endoftext|>",
+  "unk_token": {
+    "content": "<|endoftext|>",
+    "lstrip": false,
+    "normalized": true,
+    "rstrip": false,
+    "single_word": false
+  }
+}
tokenizer/tokenizer_config.json
ADDED
@@ -0,0 +1,30 @@
+{
+  "add_prefix_space": false,
+  "added_tokens_decoder": {
+    "49406": {
+      "content": "<|startoftext|>",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "49407": {
+      "content": "<|endoftext|>",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    }
+  },
+  "bos_token": "<|startoftext|>",
+  "clean_up_tokenization_spaces": true,
+  "do_lower_case": true,
+  "eos_token": "<|endoftext|>",
+  "errors": "replace",
+  "model_max_length": 77,
+  "pad_token": "<|endoftext|>",
+  "tokenizer_class": "CLIPTokenizer",
+  "unk_token": "<|endoftext|>"
+}
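The tokenizer config above caps prompts at `model_max_length: 77` and frames them with the BOS/EOS/pad tokens it defines. A hypothetical stdlib-only illustration of that framing (this is a simplification, not the `CLIPTokenizer` implementation):

```python
# Special tokens and length from tokenizer/tokenizer_config.json
bos, eos, pad = "<|startoftext|>", "<|endoftext|>", "<|endoftext|>"
model_max_length = 77

def frame(tokens):
    """Wrap tokens in BOS/EOS, truncate to the max length, pad to 77."""
    framed = [bos] + tokens[: model_max_length - 2] + [eos]
    return framed + [pad] * (model_max_length - len(framed))

ids = frame(["a", "starry", "night"])
print(len(ids), ids[0], ids[4])
```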
tokenizer/vocab.json
ADDED
The diff for this file is too large to render.
tokenizer_2/merges.txt
ADDED
The diff for this file is too large to render.
tokenizer_2/special_tokens_map.json
ADDED
@@ -0,0 +1,24 @@
+{
+  "bos_token": {
+    "content": "<|startoftext|>",
+    "lstrip": false,
+    "normalized": true,
+    "rstrip": false,
+    "single_word": false
+  },
+  "eos_token": {
+    "content": "<|endoftext|>",
+    "lstrip": false,
+    "normalized": true,
+    "rstrip": false,
+    "single_word": false
+  },
+  "pad_token": "!",
+  "unk_token": {
+    "content": "<|endoftext|>",
+    "lstrip": false,
+    "normalized": true,
+    "rstrip": false,
+    "single_word": false
+  }
+}
tokenizer_2/tokenizer_config.json
ADDED
@@ -0,0 +1,38 @@
+{
+  "add_prefix_space": false,
+  "added_tokens_decoder": {
+    "0": {
+      "content": "!",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "49406": {
+      "content": "<|startoftext|>",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "49407": {
+      "content": "<|endoftext|>",
+      "lstrip": false,
+      "normalized": true,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    }
+  },
+  "bos_token": "<|startoftext|>",
+  "clean_up_tokenization_spaces": true,
+  "do_lower_case": true,
+  "eos_token": "<|endoftext|>",
+  "errors": "replace",
+  "model_max_length": 77,
+  "pad_token": "!",
+  "tokenizer_class": "CLIPTokenizer",
+  "unk_token": "<|endoftext|>"
+}
tokenizer_2/vocab.json
ADDED
The diff for this file is too large to render.
{ORT_CUDA/sd-xl-refiner-1.0/engine/unetxl.ort_cuda.fp16 → unet}/model.onnx
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:de912f37c6c90e43bab63adf89515d76290d1eb60208b9a642795e347ff3701e
+size 736952
{ORT_CUDA/sd-xl-base-1.0/engine/unetxl.ort_cuda.fp16 → unet}/model.onnx.data
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:7cdae162dbf695bc3abb40653265bdabad17e1d9eef7c4e44beeb2a834b70cdc
 size 5135092480
{ORT_CUDA/sd-xl-base-1.0/engine/clip2.ort_cuda.fp16 → vae_decoder}/model.onnx
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:7987d20deef6934d7d30bd7486da698940765d5383a5ca009f0aad74c737ec70
+size 99072671
{ORT_CUDA/sd-xl-refiner-1.0/engine/clip2.ort_cuda.fp16 → vae_encoder}/model.onnx
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:a56f9f96a763bc9995d032d6e03159cf433569047488e7594f0b15066cbed44f
+size 68412330