---
license: openrail++
---

This repo contains a Diffusers-format version of the PixArt-Sigma repos

PixArt-alpha/pixart_sigma_sdxlvae_T5_diffusers

PixArt-alpha/PixArt-Sigma-XL-2-2K-MS

with the models loaded and saved in fp16 and bf16 formats, roughly halving their sizes.

It can be used where download bandwidth, memory or disk space are relatively low, a T4 Colab instance for example.
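
For reference, half-precision copies like these can be produced with stock Diffusers calls. The snippet below is a minimal sketch, assuming the diffusers-format source checkpoint loads directly with `PixArtSigmaPipeline` (at the time, assembling the 2K checkpoint also needed the patch mentioned below); the local output directory name is illustrative.

```py
import torch
from diffusers import PixArtSigmaPipeline

# Load the original weights, casting them to half precision on load.
pipe = PixArtSigmaPipeline.from_pretrained(
    "PixArt-alpha/pixart_sigma_sdxlvae_T5_diffusers",  # diffusers-format source repo
    torch_dtype=torch.float16,
)

# Save the fp16 weights as a named variant; repeat with torch.bfloat16
# and variant="bf16" for the bf16 copy.
pipe.save_pretrained(
    "PixArt-Sigma_2k_16bit",  # illustrative local output directory
    variant="fp16",
    safe_serialization=True,
)
```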

To use this in a Diffusers script you currently (15/04/2024) need a source installation of Diffusers and an extra 'patch' from the PixArt-alpha team's Sigma GitHub repo.
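
At the time of writing, the source install was typically done straight from GitHub, for example as below (assuming `pip` and `git` are available); the required patch from the PixArt-Sigma repo is not reproduced here.

```bash
# Install the development version of diffusers from the main branch on GitHub.
pip install git+https://github.com/huggingface/diffusers
```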

**NOTE: This model has been converted but not successfully tested. During memory-efficient attention it generates a 16 GB buffer, which appears to break an MPS limitation, but it may also mean it requires more than 16 GB even with the 16-bit model.**

Hopefully those with more memory, or on non-MPS GPUs, will have more luck running the Diffusers script below!

A Diffusers script looks like this; **currently (25th April 2024) you will need to install diffusers from source**.

```py
import random
import sys

import torch
from diffusers import PixArtSigmaPipeline

device = 'mps'
weight_dtype = torch.bfloat16

# Download the fp16 variant of the weights and cast them to bfloat16 on load.
pipe = PixArtSigmaPipeline.from_pretrained(
    "Vargol/PixArt-Sigma_2k_16bit",
    torch_dtype=weight_dtype,
    variant="fp16",
    use_safetensors=True,
)

# Enable memory optimizations.
# pipe.enable_model_cpu_offload()
pipe.to(device)

prompt = "Cinematic science fiction film still. A cybernetic demon awaits her friend in a bar selling flaming oil drinks. The barman is a huge tree being, towering over the demon"

# Generate four images, each with a fresh random seed.
for i in range(4):
    seed = random.randint(0, sys.maxsize)
    generator = torch.Generator(device).manual_seed(seed)

    image = pipe(prompt, generator=generator, num_inference_steps=40).images[0]
    image.save(f"pas_{seed}.png")
```
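
If the script runs out of memory (as in the note above), Diffusers exposes a couple of further opt-in memory savers that may be worth trying; whether they get the 2K model under the MPS limit is untested here.

```py
# Untested, optional mitigations for the memory issue described above.
pipe.enable_attention_slicing()  # compute attention in smaller slices where the model supports it
pipe.vae.enable_tiling()         # decode the large 2K latents in tiles to cut peak VAE memory
```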