Paella is a novel text-to-image model that uses a compressed, quantized latent space (based on an f8 VQGAN) and a masked training objective to achieve fast generation in roughly 10 inference steps.
The models in this repo correspond to the "Arroz con Cosas" variation of Paella, which provides:
- A clip2img model, to turn CLIP image embeddings into images.
- A prior to transform CLIP text embeddings into CLIP image embeddings.
- A custom VQ-GAN trained on 7K watercolor images.
- A Paella generator model.
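The components above chain together into a single text-to-image pipeline: a CLIP text embedding is mapped to a CLIP image embedding by the prior, the Paella generator turns that embedding into a grid of VQGAN codebook indices via iterative masked sampling, and the VQGAN decodes the indices to pixels. The sketch below is a minimal, illustrative mock-up of that flow, not the real models: the embedding dimension (768), latent grid size (32×32 for a 256×256 image at f8 compression), codebook size (8192), and all function bodies are assumptions for shape-level clarity only.

```python
import numpy as np

# Assumed, illustrative sizes -- not taken from the released checkpoints.
EMB_DIM = 768          # CLIP embedding dimension (assumption)
LATENT_HW = 32         # 256 / 8: f8 VQGAN maps a 256x256 image to a 32x32 grid
CODEBOOK_SIZE = 8192   # VQGAN codebook size (assumption)

def prior(text_emb):
    """Stub for the prior: CLIP text embedding -> CLIP image embedding."""
    return text_emb  # real model: a learned network, not an identity map

def paella_generator(img_emb, steps=10, rng=None):
    """Stub for masked generation: start from random codebook indices and,
    over ~10 steps, re-noise a shrinking fraction of positions while the
    (omitted) model would predict the rest conditioned on img_emb."""
    if rng is None:
        rng = np.random.default_rng(0)
    tokens = rng.integers(0, CODEBOOK_SIZE, size=(LATENT_HW, LATENT_HW))
    for t in np.linspace(1.0, 0.0, steps):
        # real model: predict token logits here, then re-noise fraction t
        mask = rng.random(tokens.shape) < t
        tokens[mask] = rng.integers(0, CODEBOOK_SIZE, size=int(mask.sum()))
    return tokens

def vqgan_decode(tokens):
    """Stub for the VQGAN decoder: codebook indices -> RGB image."""
    return np.zeros((LATENT_HW * 8, LATENT_HW * 8, 3), dtype=np.float32)

text_emb = np.zeros(EMB_DIM)        # would come from a CLIP text encoder
img_emb = prior(text_emb)           # text embedding -> image embedding
tokens = paella_generator(img_emb)  # ~10-step masked sampling in latent space
image = vqgan_decode(tokens)        # 256x256x3 output image
```

The key point the stubs illustrate is that generation happens over discrete codebook indices, so each of the ~10 steps updates a 32×32 integer grid rather than a full-resolution image, which is what makes sampling fast.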
## Resources
## Biases and content acknowledgment
As impressive as turning text into images is, be aware that this model may output content that reinforces or exacerbates societal biases, as well as realistic faces, pornography, and violence. The model was trained on 600 million images from the improved LAION-5B aesthetic dataset, which scraped non-curated image-text pairs from the internet (the only exception being the removal of illegal content), and is meant for research purposes.