---
title: ControlLight
emoji: πŸ“Š
colorFrom: red
colorTo: indigo
sdk: gradio
sdk_version: 3.28.2
app_file: app.py
pinned: false
license: cc-by-4.0
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
- jax-diffusers-event
---
# ControlLight: Light Control through ControlNet and Depth Map Conditioning
We propose a ControlNet trained with depth map conditioning that can control the light direction in a scene while preserving the scene's integrity.
The model was trained on the [VIDIT dataset](https://huggingface.co/datasets/Nahrawy/VIDIT-Depth-ControlNet) and [A Dataset of Flash and Ambient Illumination Pairs from the Crowd](https://huggingface.co/datasets/Nahrawy/FAID-Depth-ControlNet) as part of the [JAX Diffusers Event](https://huggingface.co/jax-diffusers-event).
Due to the limited available data, the model is clearly overfit, but it serves as a proof of concept of what can be achieved with enough data.
A large part of the training data is synthetic, so we encourage further training on synthetically generated scenes, for example using Unreal Engine.
The WandB training logs can be found [here](https://wandb.ai/hassanelnahrawy/controlnet-VIDIT-FAID). Note that the model was deliberately left to overfit for experimentation, so it is advised to use the weights at 8K steps or earlier.
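
For reference, below is a minimal inference sketch using the 🧨 diffusers library. The ControlNet checkpoint path, the base Stable Diffusion model, the depth map file, and the prompt wording are placeholders/assumptions rather than confirmed values from this repo; substitute the actual trained ControlLight weights and your own conditioning image.

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline, UniPCMultistepScheduler

# Hypothetical checkpoint path; replace with the actual ControlLight ControlNet weights.
controlnet = ControlNetModel.from_pretrained(
    "path/to/controllight-controlnet", torch_dtype=torch.float16
)

# The base Stable Diffusion checkpoint is an assumption; use the one the ControlNet was trained against.
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")

# Depth map of the scene, used as the conditioning image.
depth_map = Image.open("scene_depth.png").convert("RGB")

# Describe the scene and the desired light direction in the prompt (example wording).
prompt = "a living room, light coming from the east, soft shadows"
image = pipe(prompt, image=depth_map, num_inference_steps=30).images[0]
image.save("relit_scene.png")
```

How the light direction should be phrased in the prompt depends on how the training captions were formatted, so check the dataset cards linked above for the exact wording used during training.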
This project is joint work between [ParityError](https://huggingface.co/ParityError) and [Nahrawy](https://huggingface.co/Nahrawy).