---
license: mit
task_categories:
- text-to-image
- image-to-image
language:
- en
size_categories:
- n<1K
---

# Multimodal Concept Conjunction 250

In our paper [MultiFusion: Fusing Pre-Trained Models for Multi-Lingual, Multi-Modal Image Generation](https://arxiv.org/abs/2305.15296), we propose the MCC-250 benchmark to evaluate generative image composition capabilities for multimodal inputs. MCC-250 is built on a subset of [CC-500](https://arxiv.org/abs/2212.05032), which contains 500 text-only prompts of the pattern "a red apple and a yellow banana", each textually describing two objects with their respective attributes. With MCC-250, we provide a set of reference images for each object and attribute combination, enabling multimodal applications.

## Attribution

All images were sourced from these four stock imagery providers:
- [Pixabay](https://pixabay.com/)
- [Unsplash](https://unsplash.com/)
- [Pexels](https://www.pexels.com/)
- [Freepik](https://www.freepik.com/)
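
For illustration, the conjunction prompt pattern shared by CC-500 and MCC-250 can be sketched in a few lines of Python. The `build_prompt` helper and the example attribute/object pairs are hypothetical, shown only to clarify the pattern; they are not part of the released benchmark:

```python
def build_prompt(first, second):
    """Compose a CC-500-style prompt from two (attribute, object) pairs.

    Each pair describes one object with one attribute, e.g. ("red", "apple").
    """
    return f"a {first[0]} {first[1]} and a {second[0]} {second[1]}"

# Illustrative pairs only, not actual benchmark entries.
prompt = build_prompt(("red", "apple"), ("yellow", "banana"))
print(prompt)  # a red apple and a yellow banana
```

In MCC-250, each such attribute/object combination additionally comes with reference images, so the same prompt can be posed as a multimodal (interleaved text and image) input.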