arxiv:2407.00788

InstantStyle-Plus: Style Transfer with Content-Preserving in Text-to-Image Generation

Published on Jun 30
· Submitted by wanghaofan on Jul 2
Authors: Hao Ai, et al.
Abstract

Style transfer is an inventive process designed to create an image that maintains the essence of the original while embracing the visual style of another. Although diffusion models have demonstrated impressive generative power in personalized subject-driven or style-driven applications, existing state-of-the-art methods still encounter difficulties in achieving a seamless balance between content preservation and style enhancement. For example, amplifying the style's influence can often undermine the structural integrity of the content. To address these challenges, we deconstruct the style transfer task into three core elements: 1) Style, focusing on the image's aesthetic characteristics; 2) Spatial Structure, concerning the geometric arrangement and composition of visual elements; and 3) Semantic Content, which captures the conceptual meaning of the image. Guided by these principles, we introduce InstantStyle-Plus, an approach that prioritizes the integrity of the original content while seamlessly integrating the target style. Specifically, our method accomplishes style injection through an efficient, lightweight process, utilizing the cutting-edge InstantStyle framework. To reinforce content preservation, we initiate the process with an inverted content latent noise and a versatile plug-and-play tile ControlNet for preserving the original image's intrinsic layout. We also incorporate a global semantic adapter to enhance the semantic content's fidelity. To safeguard against the dilution of style information, a style extractor is employed as a discriminator, providing supplementary style guidance. Code will be available at https://github.com/instantX-research/InstantStyle-Plus.
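The content-preservation step above starts generation from an inverted content latent rather than random noise. As an illustration only (not the authors' implementation, whose code is not yet released), here is a minimal NumPy sketch of deterministic DDIM inversion, the standard technique for recovering such a latent; the function names and the `eps_model` callable are hypothetical placeholders for a real diffusion model's noise predictor:

```python
import numpy as np

def ddim_invert_step(x_t, eps, a_t, a_next):
    """One deterministic DDIM inversion step: map x_t to the noisier x_{t+1}.

    x_t    : current latent
    eps    : noise predicted by the diffusion model at step t
    a_t    : cumulative alpha (alpha-bar) at step t
    a_next : cumulative alpha at step t+1 (a_next < a_t, i.e. more noise)
    """
    # Predicted clean latent under the usual DDIM parameterization.
    x0_pred = (x_t - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)
    # Deterministically re-noise toward the higher-noise timestep.
    return np.sqrt(a_next) * x0_pred + np.sqrt(1.0 - a_next) * eps

def ddim_invert(x0, eps_model, alpha_bars):
    """Run inversion from a clean latent x0 up the full noise schedule."""
    x = x0
    a_prev = 1.0  # alpha-bar at t=0 (no noise)
    for t, a in enumerate(alpha_bars):
        eps = eps_model(x, t)  # noise prediction from the diffusion model
        x = ddim_invert_step(x, eps, a_prev, a)
        a_prev = a
    return x
```

The inverted latent `x` can then seed sampling so the denoising trajectory stays close to the source image's structure, while style injection (e.g. via InstantStyle) and the tile ControlNet steer appearance and layout.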

Community

Paper author and submitter:

InstantStyle-Plus is an approach that prioritizes the integrity of the original content while seamlessly integrating the target style.
[Figure: exp1.png]

exciting! congrats!

Hi @wanghaofan congrats on this work! Are you planning on sharing any artifacts (models, datasets, demos) on the hub?

See here for more info: https://huggingface.co/docs/hub/models-uploading.

You can also link your models/datasets to this paper, see here: https://huggingface.co/docs/hub/en/model-cards#linking-a-paper

Paper author

We may release the code later.



Models citing this paper: 0
Datasets citing this paper: 0
Spaces citing this paper: 0
Collections including this paper: 2

Cite arxiv.org/abs/2407.00788 in a model, dataset, or Space README.md to link it from this page.