---
license: mit
language:
  - en
pipeline_tag: text-to-image
tags:
  - openvino
  - text-to-image
---

## Model Description

This repo contains OpenVINO model files for [SimianLuo's LCM_Dreamshaper_v7](https://huggingface.co/SimianLuo/LCM_Dreamshaper_v7).

## Generation Results

By converting the model to the OpenVINO format and running it on 2x Intel(R) Xeon(R) Gold 5220R CPU @ 2.20GHz (24C/48T each), we can achieve the following results compared to the original PyTorch LCM.

Reported times include the first compile and reshape phases and should be taken with a grain of salt: the benchmark was run on a dual-socket server, which can underperform in this type of workload.

Number of images per batch is set to 1

| Run No. | PyTorch (s) | OpenVINO (s) | OpenVINO w/ reshape (s) |
|---------|-------------|--------------|-------------------------|
| 1       | 15.5841     | 18.0010      | 13.4928                 |
| 2       | 12.4634     | 5.0208       | 3.6855                  |
| 3       | 12.1551     | 4.9462       | 3.7228                  |

Number of images per batch is set to 4

| Run No. | PyTorch (s) | OpenVINO (s) | OpenVINO w/ reshape (s) |
|---------|-------------|--------------|-------------------------|
| 1       | 31.3666     | 33.1488      | 25.7044                 |
| 2       | 33.4797     | 17.7456      | 12.8295                 |
| 3       | 28.6561     | 17.9216      | 12.7198                 |
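
For reference, here is a minimal sketch of how per-run timings like these could be collected. It assumes the `pipe` object from the example below is already set up; the prompt and step count are placeholders, not necessarily the exact benchmark settings:

```python
import time

# Hypothetical timing loop; assumes `pipe` is the OVLatentConsistencyModelPipeline
# constructed in the example below. Run 1 absorbs the one-time compile (and
# optional reshape) cost, which is why the first row in the tables above is slower.
for run in range(1, 4):
    start = time.perf_counter()
    pipe(prompt="Self-portrait oil painting, a beautiful cyborg with golden hair, 8k",
         num_inference_steps=4, guidance_scale=8.0, lcm_origin_steps=50,
         output_type="pil")
    print(f"Run {run}: {time.perf_counter() - start:.4f} s")
```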

To run the model yourself, you can leverage the 🧨 Diffusers/🤗 Optimum libraries:

1. Install the libraries:

   ```bash
   pip install diffusers transformers accelerate optimum
   pip install --upgrade-strategy eager optimum[openvino]
   ```

2. Clone the inference code:

   ```bash
   git clone https://huggingface.co/deinferno/LCM_Dreamshaper_v7-openvino
   cd LCM_Dreamshaper_v7-openvino
   ```

3. Run the model:
   ```python
   from lcm_ov_pipeline import OVLatentConsistencyModelPipeline
   from lcm_scheduler import LCMScheduler

   model_id = "deinferno/LCM_Dreamshaper_v7-openvino"

   scheduler = LCMScheduler.from_pretrained(model_id, subfolder="scheduler")
   # compile=False delays compilation so the model can be reshaped first;
   # set compile=True if you don't plan to reshape and recompile.
   pipe = OVLatentConsistencyModelPipeline.from_pretrained(model_id, scheduler=scheduler, compile=False)

   prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"

   width = 512
   height = 512
   num_images = 1
   batch_size = 1
   # num_inference_steps can be 1~50. LCM supports fast inference even with <= 4 steps; 1~8 steps are recommended.
   num_inference_steps = 4

   # Reshape to static shapes and recompile for faster inference
   pipe.reshape(batch_size=batch_size, height=height, width=width, num_images_per_prompt=num_images)
   pipe.compile()

   images = pipe(prompt=prompt, width=width, height=height, num_inference_steps=num_inference_steps, guidance_scale=8.0, lcm_origin_steps=50, output_type="pil").images
   ```
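
Since `output_type="pil"` is used, the pipeline returns standard PIL images, which can be saved directly:

```python
# Write each generated PIL image to disk
for i, image in enumerate(images):
    image.save(f"lcm_dreamshaper_{i}.png")
```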