arxiv:2308.11974

Blending-NeRF: Text-Driven Localized Editing in Neural Radiance Fields

Published on Aug 23, 2023

Abstract

Text-driven localized editing of 3D objects is particularly difficult as locally mixing the original 3D object with the intended new object and style effects without distorting the object's form is not a straightforward process. To address this issue, we propose a novel NeRF-based model, Blending-NeRF, which consists of two NeRF networks: pretrained NeRF and editable NeRF. Additionally, we introduce new blending operations that allow Blending-NeRF to properly edit target regions which are localized by text. By using a pretrained vision-language aligned model, CLIP, we guide Blending-NeRF to add new objects with varying colors and densities, modify textures, and remove parts of the original object. Our extensive experiments demonstrate that Blending-NeRF produces naturally and locally edited 3D objects from various text prompts. Our project page is available at https://seokhunchoi.github.io/Blending-NeRF/
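The two-network idea in the abstract (a frozen pretrained NeRF plus an editable NeRF whose outputs are blended per sample point) can be illustrated with a minimal sketch. This is a hypothetical illustration of density-weighted field blending, not the paper's actual blending operations; the function name `blend_fields` and the specific weighting scheme are assumptions for clarity.

```python
import numpy as np

def blend_fields(sigma_orig, color_orig, sigma_edit, color_edit):
    """Hypothetical blending of two NeRF fields at sampled points.

    sigma_*: densities, shape (N,); color_*: RGB values, shape (N, 3).
    The combined density is additive, and the blended color is a
    density-weighted mix, so the editable network changes appearance
    only where it contributes density of its own.
    """
    sigma = sigma_orig + sigma_edit
    # Weight of the editable field at each point (0 where both are empty).
    w = np.where(sigma > 0, sigma_edit / np.maximum(sigma, 1e-8), 0.0)
    color = (1.0 - w[..., None]) * color_orig + w[..., None] * color_edit
    return sigma, color
```

In such a scheme, a CLIP-based loss on the rendered images would then push the editable field's density and color toward the text prompt while the pretrained field stays frozen, which keeps the edit localized.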
