Stable Diffusion 1.5 for ONNX Runtime CUDA provider

Introduction

This repository hosts optimized ONNX models of Stable Diffusion 1.5 to accelerate inference with the ONNX Runtime CUDA execution provider on NVIDIA GPUs. The models cannot run with other execution providers such as CPU or DirectML.

The models are generated by Olive with a command like the following:

python stable_diffusion.py --provider cuda --optimize
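
As a rough sketch of how the optimized models might be consumed (not an official usage example from this repository), the snippet below loads the pipeline with optimum's ORTStableDiffusionPipeline and the CUDA execution provider. It assumes optimum and onnxruntime-gpu are installed and that the exported model layout is compatible with optimum's pipeline loader.

# Minimal sketch; assumes optimum + onnxruntime-gpu are installed and the
# repository layout is compatible with optimum's pipeline loader.
from optimum.onnxruntime import ORTStableDiffusionPipeline

# Load the optimized ONNX models with the CUDA execution provider.
pipeline = ORTStableDiffusionPipeline.from_pretrained(
    "tlwu/stable-diffusion-v1-5-onnxruntime",
    provider="CUDAExecutionProvider",
)

# Generate an image from a text prompt and save it to disk.
image = pipeline("a photo of an astronaut riding a horse on the moon").images[0]
image.save("astronaut.png")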

Model Description
