aykamko commited on
Commit
244d598
1 Parent(s): 1541b6a

Update README.md

Files changed (1)
  1. README.md +6 -6
README.md CHANGED
```diff
--- a/README.md
+++ b/README.md
@@ -29,9 +29,9 @@ You can use the model with Hugging Face 🧨 Diffusers.
 
 **Playground v2** is a diffusion-based text-to-image generative model. The model was trained from scratch by the research team at [Playground](https://playground.com).
 
-Playground v2’s images are favored 2.5 times more than those produced by Stable Diffusion XL, according to Playground’s [user study](#user-study).
+Playground v2’s images are favored **2.5** times more than those produced by Stable Diffusion XL, according to Playground’s [user study](#user-study).
 
-We are thrilled to release all intermediate checkpoints at different training stages, including evaluation metrics, to the community. We hope this will foster more foundation model research in pixels.
+We are thrilled to release [intermediate checkpoints](#intermediate-base-models) at different training stages, including evaluation metrics, to the community. We hope this will foster more foundation model research in pixels.
 
 Lastly, we introduce a new benchmark, [MJHQ-30K](#mjhq-30k-benchmark), for automatic evaluation of a model’s aesthetic quality.
 
@@ -56,7 +56,7 @@ from diffusers import DiffusionPipeline
 import torch
 
 pipe = DiffusionPipeline.from_pretrained(
-    "playgroundai/playground-v256px-base",
+    "playgroundai/playground-v2-1024px-aesthetic",
     torch_dtype=torch.float16,
     use_safetensors=True,
     add_watermarker=False,
@@ -72,7 +72,7 @@ image = pipe(prompt=prompt).images[0]
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63855d851769b7c4b10e1f76/8VzBkSYaUU3dt509Co9sk.png)
 
-According to user studies conducted by Playground, involving over 2,600 prompts and thousands of users, the images generated by Playground v2 are favored 2.5 times more than those produced by [Stable Diffusion XL](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0).
+According to user studies conducted by Playground, involving over 2,600 prompts and thousands of users, the images generated by Playground v2 are favored **2.5** times more than those produced by [Stable Diffusion XL](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0).
 
 We report user preference metrics on [PartiPrompts](https://github.com/google-research/parti), following standard practice, and on an internal prompt dataset curated by the Playground team. The “Internal 1K” prompt dataset is diverse and covers various categories and tasks.
 
@@ -91,11 +91,11 @@ We introduce a new benchmark, [MJHQ-30K](https://huggingface.co/datasets/playgro
 
 We curate the high-quality dataset from Midjourney with 10 common categories, each category with 3K samples. Following common practice, we use aesthetic score and CLIP score to ensure high image quality and high image-text alignment. Furthermore, we take extra care to make the data diverse within each category.
 
-For Playground v2, we report both the overall FID and per-category FID. (All FID metrics are computed at resolution 1024x1024.)
+For Playground v2, we report both the overall FID and per-category FID. All FID metrics are computed at resolution 1024x1024. Our benchmark results show that our model outperforms SDXL-1-0-refiner in overall FID and all category FIDs, especially in people and fashion categories. This is in line with the results of the user study, which indicates a correlation between human preference and FID score on the MJHQ-30K benchmark.
 
 We release this benchmark to the public and encourage the community to adopt it for benchmarking their models’ aesthetic quality.
 
-### Base Models for all resolution
+### Intermediate Base Models
 
 | Model | FID | Clip Score |
 | ---------------------------- | ------ | ---------- |
```
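For reference, the code touched by this diff assembles into the following runnable sketch with the corrected repo id. This is an illustrative reconstruction, not part of the commit: the prompt string and output filename are placeholders, a CUDA GPU and an installed `diffusers` are assumed at generation time, and the heavy imports are deferred so the corrected model id can be inspected without downloading weights.

```python
# Repo id as corrected by this commit (previously "playgroundai/playground-v256px-base").
MODEL_ID = "playgroundai/playground-v2-1024px-aesthetic"


def load_pipeline():
    # Deferred imports: torch/diffusers are only needed when actually generating.
    import torch
    from diffusers import DiffusionPipeline

    return DiffusionPipeline.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.float16,  # half precision keeps VRAM usage modest
        use_safetensors=True,       # load the safetensors weight files
        add_watermarker=False,      # skip the invisible watermarking step
    ).to("cuda")


if __name__ == "__main__":
    pipe = load_pipeline()
    prompt = "Astronaut in a jungle, cold color palette, detailed, 8k"  # placeholder prompt
    image = pipe(prompt=prompt).images[0]
    image.save("playground_v2_sample.png")  # placeholder output path
```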