
# Aligned Diffusion Model via DPO

Diffusion model aligned with the DPO algorithm using the following reward models (a rough sketch of the objective follows the list):

- Closed-source VLMs: claude3-opus, gemini-1.5, gpt-4o, gpt-4v
- Open-source VLM: internvl-1.5
- Score model: hps-2.1
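
This card does not spell out the exact training objective, so the following is only a minimal sketch of a Diffusion-DPO-style preference loss in the spirit of the algorithm named above (assuming the formulation of Wallace et al., 2024, without per-timestep weighting). The function name, argument names, and the `beta` value are illustrative assumptions, not this repository's actual training code:

```python
import torch.nn.functional as F

def diffusion_dpo_loss(model_err_w, model_err_l, ref_err_w, ref_err_l, beta=5000.0):
    # Illustrative Diffusion-DPO-style objective (all names here are assumptions).
    # Each *_err tensor holds per-sample denoising errors ||eps - eps_hat||^2 computed
    # on the preferred (w) or dispreferred (l) image, under either the trained UNet
    # (model_*) or the frozen reference UNet (ref_*). beta scales the implicit
    # KL regularization toward the reference model.
    inside = -beta * ((model_err_w - ref_err_w) - (model_err_l - ref_err_l))
    # Reward the model for denoising preferred images better than the reference does,
    # relative to dispreferred images.
    return -F.logsigmoid(inside).mean()
```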

## How to Use

You can load the model and perform inference as follows:

```python
import torch
from diffusers import StableDiffusionPipeline, UNet2DConditionModel

pretrained_model_name = "runwayml/stable-diffusion-v1-5"

# Load the DPO-aligned UNet weights from this checkpoint
dpo_unet = UNet2DConditionModel.from_pretrained(
    "path/to/checkpoint",
    subfolder="unet",
    torch_dtype=torch.float16,
).to("cuda")

# Load the base Stable Diffusion v1.5 pipeline and swap in the aligned UNet
pipeline = StableDiffusionPipeline.from_pretrained(pretrained_model_name, torch_dtype=torch.float16)
pipeline = pipeline.to("cuda")
pipeline.safety_checker = None
pipeline.unet = dpo_unet

# Seed the generator for reproducible outputs
generator = torch.Generator(device="cuda")
generator = generator.manual_seed(1)

prompt = "a pink flower"

# guidance_scale of 7.5 is the usual Stable Diffusion default; tune as needed
image = pipeline(prompt=prompt, generator=generator, guidance_scale=7.5).images[0]
```
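
The returned `image` is a standard PIL image and can be saved directly; the filename below is just an example:

```python
image.save("pink_flower.png")
```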

## Citation

```bibtex
@misc{chen2024mjbenchmultimodalrewardmodel,
      title={MJ-Bench: Is Your Multimodal Reward Model Really a Good Judge for Text-to-Image Generation?},
      author={Zhaorun Chen and Yichao Du and Zichen Wen and Yiyang Zhou and Chenhang Cui and Zhenzhen Weng and Haoqin Tu and Chaoqi Wang and Zhengwei Tong and Qinglan Huang and Canyu Chen and Qinghao Ye and Zhihong Zhu and Yuqing Zhang and Jiawei Zhou and Zhuokai Zhao and Rafael Rafailov and Chelsea Finn and Huaxiu Yao},
      year={2024},
      eprint={2407.04842},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2407.04842},
}
```