LumiNet: Latent Intrinsics Meets Diffusion Models for Indoor Scene Relighting
Abstract
We introduce LumiNet, a novel architecture that leverages generative models and latent intrinsic representations for effective lighting transfer. Given a source image and a target lighting image, LumiNet synthesizes a relit version of the source scene that captures the target's lighting. Our approach makes two key contributions: a data curation strategy that leverages a StyleGAN-based relighting model to build our training data, and a modified diffusion-based ControlNet that processes both latent intrinsic properties from the source image and latent extrinsic properties from the target image. We further improve lighting transfer through a learned adaptor (MLP) that injects the target's latent extrinsic properties via cross-attention and fine-tuning. Unlike traditional ControlNet, which generates images from conditional maps of a single scene, LumiNet processes latent representations from two different images, preserving geometry and albedo from the source while transferring lighting characteristics from the target. Experiments demonstrate that our method successfully transfers complex lighting phenomena, including specular highlights and indirect illumination, across scenes with varying spatial layouts and materials, outperforming existing approaches on challenging indoor scenes using only images as input.
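The abstract describes two conditioning paths: a ControlNet-style branch that consumes the source image's latent intrinsics together with the target's latent extrinsics, and an MLP adaptor that injects the target's extrinsics into the diffusion UNet via cross-attention. The following is a minimal PyTorch sketch of what such a setup could look like; every module name, dimension, and the broadcast-and-concatenate fusion are assumptions for illustration, not the authors' actual implementation.

```python
import torch
import torch.nn as nn

class ExtrinsicAdaptor(nn.Module):
    """Hypothetical MLP adaptor: maps the target's latent extrinsic
    (lighting) code to a short sequence of tokens that can be appended
    to the UNet's cross-attention context."""
    def __init__(self, extrinsic_dim=512, context_dim=768, num_tokens=4):
        super().__init__()
        self.num_tokens = num_tokens
        self.context_dim = context_dim
        self.mlp = nn.Sequential(
            nn.Linear(extrinsic_dim, context_dim * num_tokens),
            nn.GELU(),
            nn.Linear(context_dim * num_tokens, context_dim * num_tokens),
        )

    def forward(self, extrinsic):                 # (B, extrinsic_dim)
        tokens = self.mlp(extrinsic)              # (B, context_dim * num_tokens)
        return tokens.view(-1, self.num_tokens, self.context_dim)

class LumiNetConditioning(nn.Module):
    """Hypothetical conditioning module: fuses per-pixel latent intrinsics
    from the SOURCE image with a broadcast latent extrinsic code from the
    TARGET image into a spatial map for the ControlNet branch, and produces
    cross-attention tokens from the extrinsics via the adaptor above."""
    def __init__(self, intrinsic_ch=64, extrinsic_dim=512, cond_ch=320):
        super().__init__()
        self.adaptor = ExtrinsicAdaptor(extrinsic_dim)
        self.fuse = nn.Conv2d(intrinsic_ch + extrinsic_dim, cond_ch, kernel_size=1)

    def forward(self, intrinsics, extrinsic):
        # intrinsics: (B, intrinsic_ch, H, W) from the source image's encoder
        # extrinsic:  (B, extrinsic_dim)       from the target image's encoder
        B, _, H, W = intrinsics.shape
        ext_map = extrinsic[:, :, None, None].expand(B, -1, H, W)
        control_input = self.fuse(torch.cat([intrinsics, ext_map], dim=1))
        attn_tokens = self.adaptor(extrinsic)     # injected via cross-attention
        return control_input, attn_tokens

# Toy usage with random tensors standing in for encoder outputs.
cond = LumiNetConditioning()
src_intrinsics = torch.randn(1, 64, 64, 64)
tgt_extrinsic = torch.randn(1, 512)
control_input, attn_tokens = cond(src_intrinsics, tgt_extrinsic)
```

In a full pipeline, `control_input` would feed the ControlNet branch while `attn_tokens` would be concatenated with the UNet's existing cross-attention context; the real dimensions, token counts, and fusion scheme belong to the paper's implementation.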
Community
LumiNet lets us relight complex indoor images by transferring lighting from one image to another!
Most importantly, it is completely data-driven - no labels, no marked-up data, not even depth or normals. No physical model either! Just two images in, one relit image out!
Check out how it handles complex lighting - inter-reflections on TV screens, cast shadows, soft shadows - everything! The model seems to "know" where light sources are and how light bounces off different surfaces. This can't be mere correlations and pattern matching - it's doing some serious spatial reasoning!
Paper: https://arxiv.org/abs/2412.00177
Project page: https://luminet-relight.github.io/
This is an automated message from the Librarian Bot. The following papers, similar to this one, were recommended by the Semantic Scholar API:
- DifFRelight: Diffusion-Based Facial Performance Relighting (2024)
- RelitLRM: Generative Relightable Radiance for Large Reconstruction Models (2024)
- Generative Portrait Shadow Removal (2024)
- ZeroComp: Zero-shot Object Compositing from Image Intrinsics via Diffusion (2024)
- MVLight: Relightable Text-to-3D Generation via Light-conditioned Multi-View Diffusion (2024)
- MLI-NeRF: Multi-Light Intrinsic-Aware Neural Radiance Fields (2024)
- ARM: Appearance Reconstruction Model for Relightable 3D Generation (2024)