LeviTor: 3D Trajectory Oriented Image-to-Video Synthesis
Abstract
The intuitive nature of drag-based interaction has led to its growing adoption for controlling object trajectories in image-to-video synthesis. However, existing methods that perform dragging in 2D space often face ambiguity when handling out-of-plane movements. In this work, we augment the interaction with a new dimension, i.e., the depth dimension, so that users can assign a relative depth to each point on the trajectory. This new interaction paradigm not only inherits the convenience of 2D dragging but also enables trajectory control in 3D space, broadening the scope of creativity. We propose a pioneering method for 3D trajectory control in image-to-video synthesis by abstracting object masks into a few cluster points. These points, accompanied by depth information and instance information, are then fed into a video diffusion model as the control signal. Extensive experiments validate the effectiveness of our approach, dubbed LeviTor, in precisely manipulating object movements while producing photo-realistic videos from static images. Project page: https://ppetrichor.github.io/levitor.github.io/
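The abstract describes condensing each object mask into a few cluster points that carry depth and instance information. A minimal sketch of that idea, assuming K-means over mask pixel coordinates and depth sampled at each cluster center (the function name, cluster count, and point format are illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np
from sklearn.cluster import KMeans

def mask_to_control_points(mask, depth, instance_id, n_clusters=4):
    """Abstract a binary object mask into a few control points.

    Each point is (x, y, depth, instance_id): the (x, y) come from
    K-means cluster centers over the mask's pixel coordinates, the
    depth is sampled from the depth map at that location, and the
    instance id tags which object the point belongs to.
    """
    ys, xs = np.nonzero(mask)                      # pixels inside the mask
    coords = np.stack([xs, ys], axis=1).astype(np.float32)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(coords)
    points = []
    for cx, cy in km.cluster_centers_:
        d = depth[int(round(cy)), int(round(cx))]  # relative depth at center
        points.append((float(cx), float(cy), float(d), instance_id))
    return points

# Toy example: a rectangular object mask with a constant depth map.
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 10:30] = True
depth = np.full((64, 64), 0.5, dtype=np.float32)
pts = mask_to_control_points(mask, depth, instance_id=1, n_clusters=4)
```

In practice, dragging these few points (and adjusting their depth values) gives the diffusion model a compact per-object control signal instead of a dense mask.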
Community
The following papers were recommended by the Semantic Scholar API:
- ObjCtrl-2.5D: Training-free Object Control with Camera Poses (2024)
- OmniDrag: Enabling Motion Control for Omnidirectional Image-to-Video Generation (2024)
- InTraGen: Trajectory-controlled Video Generation for Object Interactions (2024)
- 3DTrajMaster: Mastering 3D Trajectory for Multi-Entity Motion in Video Generation (2024)
- AnchorCrafter: Animate CyberAnchors Saling Your Products via Human-Object Interacting Video Generation (2024)
- TIV-Diffusion: Towards Object-Centric Movement for Text-driven Image to Video Generation (2024)
- I2VControl: Disentangled and Unified Video Motion Synthesis Control (2024)