---
license: mit
task_categories:
- text-to-image
- image-to-image
language:
- en
size_categories:
- n<1K
---
# Multimodal Concept Conjunction 250
In our paper *MultiFusion: Fusing Pre-Trained Models for Multi-Lingual, Multi-Modal Image Generation* we propose the MCC-250 benchmark to evaluate generative image composition capabilities for multimodal inputs. MCC-250 is built on a subset of CC-500, which contains 500 text-only prompts of the pattern "a red apple and a yellow banana", each textually describing two objects with their respective attributes.
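To make the prompt pattern concrete, here is a minimal sketch that composes two-object prompts in the CC-500 style from attribute/object pairs. The pairs below are illustrative examples, not the actual benchmark entries.

```python
# Sketch of the CC-500 prompt pattern underlying MCC-250.
# The object/attribute pairs are illustrative, not the real benchmark data.
from itertools import combinations

objects = {"apple": "red", "banana": "yellow", "car": "blue"}

def make_prompt(pair):
    """Compose a two-object prompt, e.g. 'a red apple and a yellow banana'."""
    (obj1, attr1), (obj2, attr2) = pair
    return f"a {attr1} {obj1} and a {attr2} {obj2}"

prompts = [make_prompt(p) for p in combinations(objects.items(), 2)]
print(prompts[0])  # -> a red apple and a yellow banana
```

MCC-250 extends each such prompt by pairing every attribute/object combination with reference images, so the same composition task can be posed with interleaved image and text inputs.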
With MCC-250, we provide a set of reference images for each object and attribute combination, enabling multimodal applications.
## Attribution
All images were sourced from the following four stock imagery providers: