---
license: openrail++
---

This repo contains a diffusers-format version of the PixArt-Sigma repos
PixArt-alpha/pixart_sigma_sdxlvae_T5_diffusers and
PixArt-alpha/PixArt-Sigma-XL-2-2K-MS,
with the models loaded and saved in fp16 and bf16 formats, roughly halving their sizes.
It can be used where download bandwidth, memory, or disk space is limited, for example on a T4 Colab instance.
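The halving comes straight from the element size: fp16 and bf16 store each weight in 2 bytes, versus 4 bytes for the fp32 originals. A quick sanity check with torch:

```python
import torch

# Each fp32 weight takes 4 bytes; fp16 and bf16 take 2,
# which is where the roughly-halved checkpoint size comes from.
for dtype in (torch.float32, torch.float16, torch.bfloat16):
    print(dtype, torch.finfo(dtype).bits // 8, "bytes per weight")
```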

**NOTE: This model has been converted but not successfully tested. During memory-efficient attention it allocates a 16 GB buffer, which appears to break an MPS limitation, but it may also mean it requires more than 16 GB even with the 16-bit model.**

The diffusers script below assumes that those with more memory on non-MPS GPUs will have more luck running it!

A diffusers script looks like this; **currently (25th April 2024) you will need to install diffusers from source**.
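Installing diffusers from source is typically done with pip straight from the GitHub repository:

```shell
pip install git+https://github.com/huggingface/diffusers
```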


```py
import random
import sys
import torch
from diffusers import PixArtSigmaPipeline

device = "mps"  # use "cuda" on NVIDIA GPUs
weight_dtype = torch.float16  # matches the "fp16" variant below; bf16 weights are also in the repo

pipe = PixArtSigmaPipeline.from_pretrained(
    "Vargol/PixArt-Sigma_2k_16bit",
    torch_dtype=weight_dtype,
    variant="fp16",
    use_safetensors=True,
)

# Enable memory optimizations.
# pipe.enable_model_cpu_offload()
pipe.to(device)

prompt = "Cinematic science fiction film still.A cybernetic demon awaits her friend in a bar selling flaming oil drinks.  The barman is a huge tree being, towering over the demon"

for i in range(4):
    seed = random.randint(0, sys.maxsize)
    generator = torch.Generator(device).manual_seed(seed)

    image = pipe(prompt, generator=generator, num_inference_steps=40).images[0]
    image.save(f"pas_{seed}.png")

```