Spaces: Runtime error
patrickvonplaten committed • Commit 914f06c • Parent(s): 6d4e0ea
Update app.py
app.py
CHANGED
@@ -79,15 +79,22 @@ def get_dataframe_all():
 
 TITLE = "# Open Parti Prompts Leaderboard"
 DESCRIPTION = """
-*
+The *Open Parti Prompts Leaderboard* compares state-of-the-art, open-source text-to-image models to each other according to **human preferences**. \n\n
+Text-to-image models are notoriously difficult to evaluate. [FID](https://en.wikipedia.org/wiki/Fr%C3%A9chet_inception_distance) and
+[CLIP Score](https://en.wikipedia.org/wiki/Fr%C3%A9chet_inception_distance) are not enough to accurately state whether a text-to-image model can
+**generate "good" images**. "Good" is extremely difficult to put into numbers. \n\n
+Instead, the **Open Parti Prompts Leaderboard** uses human feedback from the community to compare images from different text-to-image models to each other.
+
 """
 
 EXPLANATION = """\n\n
 ## How the data is collected 📊 \n\n
 
-In the [
-
-
+In more detail, the [Open Parti Prompts Game](https://huggingface.co/spaces/OpenGenAI/open-parti-prompts) collects human preferences that state which generated image
+best fits a given prompt from the [Parti Prompts](https://huggingface.co/datasets/nateraw/parti-prompts) dataset. Parti Prompts has been designed to challenge
+text-to-image models on prompts of varying categories and difficulty. The images have been pre-generated from the models that are compared in this space.
+For more information on how the images were created, please refer to [Open Parti Prompts](https://huggingface.co/spaces/OpenGenAI/open-parti-prompts).
+The community's answers are then stored and used in this space to give a human-preference-based comparison of the different models. \n\n
 
 Currently the leaderboard includes the following models:
 - [sd-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
@@ -95,7 +102,8 @@ Currently the leaderboard includes the following models:
 - [if-v1-0](https://huggingface.co/DeepFloyd/IF-I-XL-v1.0)
 - [karlo](https://huggingface.co/kakaobrain/karlo-v1-alpha) \n\n
 
-In the following you can see three result tables. The first shows
+In the following you can see three result tables. The first shows the overall comparison of the 4 models. The score states
+**the percentage at which images generated from the corresponding model are preferred over the image from all other models**. The second and third tables
 show you a breakdown analysis per category and per type of challenge as defined by [Parti Prompts](https://huggingface.co/datasets/nateraw/parti-prompts).
 """
 
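The overall score described in the added text — the percentage at which a model's images are preferred over the images from the other models — can be sketched as a small aggregation. This is a minimal illustration, not the space's actual implementation: the `(winner, loser)` vote-record format and the `preference_percentages` helper are assumptions for the example.

```python
from collections import defaultdict


def preference_percentages(votes):
    """Compute, per model, the percentage of pairwise comparisons in which
    its image was preferred over the image from another model.

    `votes` is a hypothetical record format (not the real app's schema):
    a list of (winner, loser) model-name pairs, one per human preference.
    """
    wins = defaultdict(int)     # comparisons the model won
    total = defaultdict(int)    # comparisons the model took part in
    for winner, loser in votes:
        wins[winner] += 1
        total[winner] += 1
        total[loser] += 1
    return {model: 100.0 * wins[model] / total[model] for model in total}


# Example with made-up votes between three of the listed models:
votes = [("karlo", "sd-v1-5"), ("if-v1-0", "karlo"), ("karlo", "if-v1-0")]
print(preference_percentages(votes))
```

A model that wins every comparison it appears in scores 100; one that never wins scores 0, which matches the "preferred over the image from all other models" reading of the score.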