arxiv:2306.08276

TryOnDiffusion: A Tale of Two UNets

Published on Jun 14, 2023
Submitted by akhaliq on Jun 16, 2023
#1 Paper of the day
Authors:
Luyang Zhu, Dawei Yang, Tyler Zhu, Fitsum Reda, William Chan, Chitwan Saharia, Mohammad Norouzi, Ira Kemelmacher-Shlizerman
Abstract

Given two images depicting a person and a garment worn by another person, our goal is to generate a visualization of how the garment might look on the input person. A key challenge is to synthesize a photorealistic, detail-preserving visualization of the garment while warping the garment to accommodate a significant change in body pose and shape across the subjects. Previous methods either focus on garment detail preservation without effective pose and shape variation, or allow try-on with the desired shape and pose but lack garment details. In this paper, we propose a diffusion-based architecture that unifies two UNets (referred to as Parallel-UNet), which allows us to preserve garment details and warp the garment for significant pose and body change in a single network. The key ideas behind Parallel-UNet are: 1) the garment is warped implicitly via a cross-attention mechanism, and 2) garment warping and person blending happen as part of a unified process rather than as a sequence of two separate tasks. Experimental results indicate that TryOnDiffusion achieves state-of-the-art performance both qualitatively and quantitatively.
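
The implicit-warping idea from the abstract can be sketched in a few lines. The snippet below is a minimal PyTorch illustration, not the authors' code: flattened person-UNet features act as attention queries while garment-UNet features supply keys and values, so garment detail is transferred ("implicitly warped") by attention weights rather than by an explicit flow field. The class name GarmentCrossAttention, the feature sizes, and the normalization/residual placement are all illustrative assumptions.

import torch
import torch.nn as nn

class GarmentCrossAttention(nn.Module):
    """Sketch: person features (queries) attend to garment features (keys/values)."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_person = nn.LayerNorm(dim)
        self.norm_garment = nn.LayerNorm(dim)

    def forward(self, person_feat: torch.Tensor, garment_feat: torch.Tensor) -> torch.Tensor:
        # person_feat:  (B, C, Hp, Wp) from the person branch
        # garment_feat: (B, C, Hg, Wg) from the garment branch
        b, c, hp, wp = person_feat.shape
        q = person_feat.flatten(2).transpose(1, 2)    # (B, Hp*Wp, C)
        kv = garment_feat.flatten(2).transpose(1, 2)  # (B, Hg*Wg, C)
        # Each person location gathers garment detail from wherever it sits
        # in the garment image; no explicit warp field is ever computed.
        out, _ = self.attn(self.norm_person(q), self.norm_garment(kv), self.norm_garment(kv))
        out = out + q  # residual keeps the person-branch information
        return out.transpose(1, 2).reshape(b, c, hp, wp)

# Example: fuse garment features into person features at one UNet resolution.
person = torch.randn(2, 128, 16, 16)
garment = torch.randn(2, 128, 16, 16)
fused = GarmentCrossAttention(dim=128)(person, garment)
print(fused.shape)  # torch.Size([2, 128, 16, 16])

Because attention lets every person-feature location look at every garment-feature location, pose and shape differences between the two subjects are handled by the attention weights themselves, which is what allows warping and blending to happen in one unified pass rather than as two separate stages.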

Community


I am eagerly waiting to try this model out hehe:) Is there any word on when this would be made available to the public!?

Any plans to release this model? Would love to pay google to use this via an API :)

We have one implementation ready: https://github.com/kailashahirwar/tryondiffusion. We do not have the weights available yet, but we are working on it.


Models citing this paper 0

No model linking this paper


Datasets citing this paper 0

No dataset linking this paper


Spaces citing this paper 0

No Space linking this paper


Collections including this paper 5