---
license: other
license_name: playground-v2dot5-community
license_link: >-
  https://huggingface.co/playgroundai/playground-v2.5-1024px-aesthetic/blob/main/LICENSE.md
tags:
- text-to-image
- playground
inference:
  parameters:
    guidance_scale: 3
---
# Playground v2.5 – 1024px Aesthetic Model

This repository contains a model that generates highly aesthetic images at 1024x1024 resolution, as well as in portrait and landscape aspect ratios. You can use the model with Hugging Face 🧨 Diffusers.
Playground v2.5 is a diffusion-based text-to-image generative model, and a successor to Playground v2.
Playground v2.5 is the state-of-the-art open-source model in aesthetic quality. Our user studies demonstrate that our model outperforms SDXL, Playground v2, PIXART-α, DALL-E 3, and Midjourney 5.2.
For details on the development and training of our model, please refer to our blog post [link] and technical report [link].
## Model Description
- Developed by: Playground
- Model type: Diffusion-based text-to-image generative model
- License: Playground v2.5 Community License
- Summary: This model generates images based on text prompts. It is a Latent Diffusion Model that uses two fixed, pre-trained text encoders (OpenCLIP-ViT/G and CLIP-ViT/L). It follows the same architecture as Stable Diffusion XL.
## Using the model with 🧨 Diffusers
Install diffusers >= 0.26.0 and some dependencies:

```shell
pip install "diffusers>=0.26.0" transformers accelerate safetensors
```
To run our model, you will need to use our custom pipeline from this gist: https://gist.github.com/aykamko/402e948a8fdbbc9613f9978802d90194
Notes:
- Only the Euler, Heun, and DPM++ 2M Karras schedulers have been tested
- We recommend using `guidance_scale=7.0` for Euler/Heun, and `guidance_scale=5.0` for DPM++ 2M Karras
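The recommendations above can be captured in a small hypothetical helper; the scheduler names here are informal labels chosen for this sketch, not diffusers class names:

```python
# Recommended guidance_scale per tested scheduler (informal labels).
RECOMMENDED_GUIDANCE = {
    "euler": 7.0,
    "heun": 7.0,
    "dpmpp_2m_karras": 5.0,
}

def recommended_guidance(scheduler: str, default: float = 7.0) -> float:
    """Look up the recommended guidance_scale, falling back to the Euler/Heun value."""
    return RECOMMENDED_GUIDANCE.get(scheduler.lower(), default)
```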
Then, run the following snippet:

```python
import torch

# copy/paste pipeline code here from gist: https://gist.github.com/aykamko/402e948a8fdbbc9613f9978802d90194

pipe = PlaygroundV2dot5Pipeline.from_pretrained(
    "playgroundai/playground-v2.5-1024px-aesthetic",
    torch_dtype=torch.float16,
    use_safetensors=True,
    add_watermarker=False,
    variant="fp16",
)
pipe.to("cuda")

# Optional: use DPM++ 2M Karras scheduler for improved quality on small details
# from diffusers import DPMSolverMultistepScheduler
# pipe.scheduler = DPMSolverMultistepScheduler(**common_config, use_karras_sigmas=True)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt=prompt, guidance_scale=7.0).images[0]
```
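Since the model also supports portrait and landscape aspect ratios, you may want to pass explicit `width`/`height` arguments to the pipeline call. Below is a minimal sketch of one way to pick dimensions near the model's 1024x1024 pixel budget; the choice of snapping sizes to multiples of 64 is an assumption for latent-friendly shapes, not something prescribed by this model card:

```python
import math

def dims_for_aspect(aspect: float, base: int = 1024, multiple: int = 64):
    """Return (width, height) close to base*base total pixels for a given
    width/height aspect ratio, snapped to the nearest multiple."""
    width = base * math.sqrt(aspect)
    height = base / math.sqrt(aspect)
    snap = lambda x: max(multiple, round(x / multiple) * multiple)
    return snap(width), snap(height)
```

For example, `dims_for_aspect(16 / 9)` yields a landscape size which could then be passed as `pipe(prompt=prompt, width=w, height=h, guidance_scale=7.0)`.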
## Using the model with Automatic1111/ComfyUI
Support coming soon. We will update this model card with instructions when ready.
## User Studies
This model card only provides a brief summary of our user study results. For extensive details on how we perform user studies, please check out our technical report: [link]
We conducted studies to measure overall aesthetic quality, as well as the specific areas we aimed to improve with Playground v2.5, namely multi-aspect-ratio support and human preference alignment.
The aesthetic quality of Playground v2.5 dramatically outperforms the current state-of-the-art open source models SDXL and PIXART-α, as well as Playground v2. Because the performance differential between Playground v2.5 and SDXL was so large, we also tested our aesthetic quality against world-class closed-source models like DALL-E 3 and Midjourney 5.2, and found that Playground v2.5 outperforms them as well.
Similarly, for multi-aspect-ratio generation, we outperform SDXL by a large margin.
Next, we benchmark Playground v2.5 specifically on people-related images, to test Human Preference Alignment. We compared Playground v2.5 against two commonly-used baseline models: SDXL and RealStock v2, a community fine-tune of SDXL that was trained on a realistic people dataset.
Playground v2.5 outperforms both baselines by a large margin.
Lastly, we report metrics using our MJHQ-30K benchmark which we open-sourced with the v2 release. We report both the overall FID and per category FID. All FID metrics are computed at resolution 1024x1024. Our results show that Playground v2.5 outperforms both Playground v2 and SDXL in overall FID and all category FIDs, especially in the people and fashion categories. This is in line with the results of the user study, which indicates a correlation between human preferences and the FID score of the MJHQ-30K benchmark.
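For readers unfamiliar with the FID numbers cited above: FID is the Fréchet distance between Gaussians fit to feature embeddings of real and generated images. The sketch below illustrates the underlying formula on plain feature arrays; it is not the MJHQ-30K benchmark's implementation, which computes features with a pretrained Inception network:

```python
import numpy as np

def _sqrtm_psd(mat):
    """Matrix square root of a symmetric positive semi-definite matrix
    via eigendecomposition (negative eigenvalues clipped for stability)."""
    w, v = np.linalg.eigh(mat)
    w = np.clip(w, 0, None)
    return (v * np.sqrt(w)) @ v.T

def frechet_distance(feats_a, feats_b):
    """Fréchet distance between Gaussians fit to two feature sets (rows = samples):
    ||mu_a - mu_b||^2 + Tr(C_a + C_b - 2 (C_a C_b)^(1/2))."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    # Tr((C_a C_b)^(1/2)) computed via the symmetric product C_a^(1/2) C_b C_a^(1/2)
    sqrt_a = _sqrtm_psd(cov_a)
    tr_covmean = np.trace(_sqrtm_psd(sqrt_a @ cov_b @ sqrt_a))
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a) + np.trace(cov_b) - 2.0 * tr_covmean)
```

Identical feature sets give a distance of zero, while a mean shift between the two sets grows the score, which is why lower FID tracks closer alignment with the reference distribution.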
## How to cite us
TODO: Link to the technical report
```bibtex
@misc{playground-v2.5,
  url={https://huggingface.co/playgroundai/playground-v2.5-1024px-aesthetic},
  title={Playground v2.5: Three Insights for Achieving State of the Art in Text-to-Image Generation},
  author={Li, Daiqing and Kamko, Aleks and Sabet, Ali and Akhgari, Ehsan and Xu, Linmiao and Doshi, Suhail}
}
```