xGen-VideoSyn-1: High-fidelity Text-to-Video Synthesis with Compressed Representations
Abstract
We present xGen-VideoSyn-1, a text-to-video (T2V) generation model capable of producing realistic scenes from textual descriptions. Building on recent advances such as OpenAI's Sora, we explore the latent diffusion model (LDM) architecture and introduce a video variational autoencoder (VidVAE). VidVAE compresses video data both spatially and temporally, significantly reducing the length of the visual token sequence and the computational demands of generating long videos. To further reduce computational cost, we propose a divide-and-merge strategy that maintains temporal consistency across video segments. Our Diffusion Transformer (DiT) model incorporates spatial and temporal self-attention layers, enabling robust generalization across different timeframes and aspect ratios. We devised a data processing pipeline from scratch and collected over 13M high-quality video-text pairs. The pipeline includes multiple steps, such as clipping, text detection, motion estimation, aesthetic scoring, and dense captioning with our in-house video-LLM. Training the VidVAE and DiT models required approximately 40 and 642 H100 days, respectively. Our model supports end-to-end generation of 720p videos longer than 14 seconds and demonstrates competitive performance against state-of-the-art T2V models.
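To make the compression and segmentation ideas concrete, here is a minimal sketch, not the authors' released code. The 4x temporal and 8x8 spatial downsampling factors, the 2x2 DiT patch size, the segment length, the overlap width, and the `encode` stand-in are all illustrative assumptions, not values stated in the abstract.

```python
import numpy as np

def latent_token_count(frames: int, height: int, width: int,
                       t_down: int = 4, s_down: int = 8,
                       patch: int = 2) -> int:
    """Visual tokens left after VidVAE compression plus DiT patchification.

    t_down, s_down, and patch are assumed, illustrative factors.
    """
    t = frames // t_down           # temporal compression
    h = height // s_down           # spatial compression
    w = width // s_down
    return t * (h // patch) * (w // patch)

# A 14 s, 24 fps, 720p clip: 336 frames of 720x1280.
print(latent_token_count(336, 720, 1280))  # -> 302400 tokens

def divide_and_merge(video: np.ndarray, encode, seg_len: int = 16,
                     overlap: int = 4) -> np.ndarray:
    """Encode a long clip in overlapping temporal segments and cross-fade
    the overlap: one plausible reading of a divide-and-merge strategy.

    `encode` is a hypothetical stand-in mapping a (T, ...) segment to a
    same-length latent; a real VidVAE would also compress the time axis.
    """
    stride = seg_len - overlap
    pieces = []
    for start in range(0, video.shape[0], stride):
        seg = encode(video[start:start + seg_len])
        if pieces:
            # Linearly blend the shared frames so the seam stays consistent.
            ramp = np.linspace(0.0, 1.0, overlap).reshape(
                -1, *([1] * (seg.ndim - 1)))
            pieces[-1][-overlap:] = ((1 - ramp) * pieces[-1][-overlap:]
                                     + ramp * seg[:overlap])
            seg = seg[overlap:]
        pieces.append(seg)
        if start + seg_len >= video.shape[0]:
            break
    return np.concatenate(pieces, axis=0)

# Toy usage with an identity "encoder":
clip = np.random.rand(40, 3, 8, 8)
merged = divide_and_merge(clip, lambda s: s.astype(np.float32))
assert merged.shape[0] == clip.shape[0]
```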
Community
When I try to click on the GitHub link, I get "Access forbidden from this IP address": The Salesforce AI Research organization has an IP allow list enabled and x.y.z.w is not permitted to access this resource. Please contact an owner of the organization for details on allowed IP addresses for the account.
The code and checkpoint are currently under internal review and will be released to the public once this process is complete. This may take an additional 1-2 weeks. Thank you for your patience!
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer (2024)
- OpenVid-1M: A Large-Scale High-Quality Dataset for Text-to-video Generation (2024)
- VidGen-1M: A Large-Scale Dataset for Text-to-video Generation (2024)
- VEnhancer: Generative Space-Time Enhancement for Video Generation (2024)
- Tora: Trajectory-oriented Diffusion Transformer for Video Generation (2024)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space.
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend