Move-in-2D: 2D-Conditioned Human Motion Generation
Abstract
Generating realistic human videos remains a challenging task, with the most effective methods currently relying on a human motion sequence as a control signal. Existing approaches often reuse motion extracted from other videos, which restricts applications to specific motion types and global scene matching. We propose Move-in-2D, a novel approach that generates human motion sequences conditioned on a scene image, allowing for diverse motion that adapts to different scenes. Our approach uses a diffusion model that accepts both a scene image and a text prompt as inputs, producing a motion sequence tailored to the scene. To train this model, we collect a large-scale video dataset of single-human activities, annotating each video with the corresponding human motion as the target output. Experiments demonstrate that our method effectively predicts human motion that aligns with the scene image after projection. Furthermore, we show that the generated motion sequences improve human motion quality in video synthesis tasks.
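The abstract describes a diffusion model that denoises a human motion sequence conditioned on a scene-image embedding and a text prompt. Below is a minimal sketch of what such conditional sampling could look like; the `MotionDenoiser` module, the feature dimensions, and the plain DDPM schedule are all illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of scene- and text-conditioned motion diffusion sampling.
# All module names, dimensions, and the DDPM schedule are illustrative
# assumptions; this page does not specify the paper's exact architecture.
import torch
import torch.nn as nn

SEQ_LEN, MOTION_DIM, COND_DIM = 120, 263, 512  # hypothetical sizes

class MotionDenoiser(nn.Module):
    """Placeholder denoiser: predicts noise for a noisy motion sequence
    given scene-image and text-prompt embeddings (stand-in for a transformer)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(MOTION_DIM + 2 * COND_DIM + 1, 1024),
            nn.SiLU(),
            nn.Linear(1024, MOTION_DIM),
        )

    def forward(self, x_t, t, scene_emb, text_emb):
        # Broadcast the conditions and timestep to every frame of the sequence.
        b, n, _ = x_t.shape
        cond = torch.cat([scene_emb, text_emb], dim=-1)
        cond = cond[:, None, :].expand(b, n, -1)
        tt = t.float()[:, None, None].expand(b, n, 1) / 1000.0
        return self.net(torch.cat([x_t, cond, tt], dim=-1))

@torch.no_grad()
def sample_motion(model, scene_emb, text_emb, steps=50):
    """Plain DDPM ancestral sampling over a motion sequence."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn(scene_emb.shape[0], SEQ_LEN, MOTION_DIM)
    for i in reversed(range(steps)):
        t = torch.full((x.shape[0],), i, dtype=torch.long)
        eps = model(x, t, scene_emb, text_emb)
        # Posterior mean per the standard DDPM update.
        coef = betas[i] / torch.sqrt(1.0 - alpha_bars[i])
        mean = (x - coef * eps) / torch.sqrt(alphas[i])
        noise = torch.randn_like(x) if i > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[i]) * noise
    return x  # (batch, frames, motion features)

model = MotionDenoiser()
scene_emb = torch.randn(1, COND_DIM)  # e.g., from an image encoder
text_emb = torch.randn(1, COND_DIM)   # e.g., from a text encoder
motion = sample_motion(model, scene_emb, text_emb)
print(motion.shape)  # torch.Size([1, 120, 263])
```

In practice the denoiser would be a sequence model (e.g., a transformer over frames) and the motion representation a standard pose parameterization; the point here is only the shape of the conditioning interface: scene embedding plus text embedding in, motion sequence out.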
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Motion-2-to-3: Leveraging 2D Motion Data to Boost 3D Motion Generation (2024)
- Diffusion Implicit Policy for Unpaired Scene-aware Motion Synthesis (2024)
- Motion Prompting: Controlling Video Generation with Motion Trajectories (2024)
- Fleximo: Towards Flexible Text-to-Human Motion Video Generation (2024)
- One-shot Human Motion Transfer via Occlusion-Robust Flow Prediction and Neural Texturing (2024)
- Motion Control for Enhanced Complex Action Video Generation (2024)
- MotionStone: Decoupled Motion Intensity Modulation with Diffusion Transformer for Image-to-Video Generation (2024)