arxiv:2406.16855

DreamBench++: A Human-Aligned Benchmark for Personalized Image Generation

Published on Jun 24 · Submitted by yuangpeng on Jun 25
#1 Paper of the day
Authors:
Abstract

Personalized image generation holds great promise for assisting people in everyday work and life thanks to its ability to creatively generate personalized content. However, current evaluations are either automated but misaligned with human judgment, or human-based but time-consuming and expensive. In this work, we present DreamBench++, a human-aligned benchmark automated by advanced multimodal GPT models. Specifically, we systematically design the evaluation prompts so that GPT is both human-aligned and self-aligned, empowered with task reinforcement. Further, we construct a comprehensive dataset comprising diverse images and prompts. By benchmarking 7 modern generative models, we demonstrate that DreamBench++ yields significantly more human-aligned evaluation and brings new findings to the community.

Community

Paper author (submitter):

DreamBench++ provides a fair, human-aligned benchmark for personalized image generation.

  • We collect 150 diverse images and 1,350 prompts covering simple, stylized, and imaginative content.

  • We use multimodal large language models (e.g., GPT-4o) to build automated evaluation metrics aligned with human preferences; a rough sketch of this evaluation setup follows below.
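The bullets above describe the core idea: prompt a multimodal GPT model to judge a generated image against both the reference concept and the text prompt. Below is a minimal, hypothetical sketch of what such an automated evaluation call could look like with the OpenAI Python client. The prompt wording, the 0–4 rubric, and the function and file names are illustrative assumptions, not the exact protocol from the paper.

```python
# A minimal sketch of GPT-4o-as-evaluator scoring, in the spirit of DreamBench++.
# The rubric, prompt text, and helper names below are illustrative assumptions.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def image_to_data_url(path: str) -> str:
    """Encode a local image as a base64 data URL for the chat API."""
    with open(path, "rb") as f:
        return "data:image/png;base64," + base64.b64encode(f.read()).decode()


def score_generation(reference_path: str, generated_path: str, prompt: str) -> str:
    """Ask GPT-4o to rate concept preservation and prompt following (hypothetical rubric)."""
    messages = [
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": (
                        "You are evaluating a personalized image generation result.\n"
                        f"Text prompt: {prompt}\n"
                        "On a 0-4 scale, rate (1) how faithfully the second image "
                        "preserves the subject shown in the first image, and "
                        "(2) how well it follows the text prompt. "
                        'Reply as JSON: {"concept": int, "prompt": int}.'
                    ),
                },
                {"type": "image_url", "image_url": {"url": image_to_data_url(reference_path)}},
                {"type": "image_url", "image_url": {"url": image_to_data_url(generated_path)}},
            ],
        }
    ]
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content


# Example usage (hypothetical file names):
# print(score_generation("ref_dog.png", "gen_dog_in_space.png", "a dog floating in space"))
```

Note that the paper's prompt design additionally aims for self-alignment and task reinforcement, which this simple sketch does not attempt to reproduce.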


There's a simple summary of this paper here; feedback is welcome: https://www.aimodels.fyi/papers/arxiv/dreambench-human-aligned-benchmark-personalized-image-generation

