TDM: Learning Few-Step Diffusion Models by Trajectory Distribution Matching

This is the Official Repository of "Learning Few-Step Diffusion Models by Trajectory Distribution Matching", by Yihong Luo, Tianyang Hu, Jiacheng Sun, Yujun Cai, Jing Tang.

User Study Time!

Which one do you think is better? Some images are generated by Pixart-α (50 NFE), and some are generated by TDM (4 NFE), distilled from Pixart-α in a data-free way with merely 500 training iterations and 2 A800 GPU hours.

Click for answer

TDM's positions (left to right): bottom, bottom, top, bottom, top.
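As a rough illustration of what 4-NFE inference looks like, the sketch below uses the standard diffusers PixArtAlphaPipeline with the public teacher checkpoint and simply sets `num_inference_steps=4`. The TDM weight path, the prompt, and the zero guidance scale are placeholders/assumptions (the distilled weights are not yet released; see the TODO below), not part of this repository's released code.

```python
import torch
from diffusers import PixArtAlphaPipeline

# Load the Pixart-α teacher pipeline; a TDM student is assumed to share its architecture.
pipe = PixArtAlphaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS", torch_dtype=torch.float16
).to("cuda")

# Hypothetical: swap in the distilled TDM transformer weights once they are released.
# state_dict = torch.load("tdm_pixart_transformer.pt")
# pipe.transformer.load_state_dict(state_dict)

# Few-step sampling: 4 NFE instead of the teacher's 50.
# guidance_scale=0 is an assumption (distilled students often bake guidance in).
image = pipe(
    "a photo of an astronaut riding a horse on mars",
    num_inference_steps=4,
    guidance_scale=0.0,
).images[0]
image.save("tdm_4nfe_sample.png")
```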

Fast Text-to-Video Generation

Our proposed TDM can be easily extended to text-to-video.

Teacher (above) / Student (below)

The video above was generated by CogVideoX-2B (100 NFE). In the same amount of time, TDM (4 NFE) can generate 25 videos, as shown below, achieving a 25× speedup without performance degradation. (Note: the noise in the GIF is due to compression.)
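Along the same lines, here is a minimal sketch of few-step video sampling with the diffusers CogVideoXPipeline. The prompt is arbitrary, and loading TDM student weights is left as a placeholder until the checkpoints are released; only the teacher model ID and the 4-step setting come from the comparison above.

```python
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

# Teacher pipeline; a TDM student would reuse the same architecture and config.
pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-2b", torch_dtype=torch.float16
).to("cuda")

# 4 NFE instead of the teacher's 100 -- the source of the ~25x speedup quoted above.
video = pipe(
    prompt="a panda playing guitar in a bamboo forest",
    num_inference_steps=4,
).frames[0]
export_to_video(video, "tdm_4nfe_video.mp4", fps=8)
```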

🔥TODO

  • Pre-trained Models will be released soon.

Contact

Please contact Yihong Luo (yluocg@connect.ust.hk) if you have any questions about this work.

Bibtex

```bibtex
@misc{luo2025tdm,
      title={Learning Few-Step Diffusion Models by Trajectory Distribution Matching},
      author={Yihong Luo and Tianyang Hu and Jiacheng Sun and Yujun Cai and Jing Tang},
      year={2025},
      eprint={2503.06674},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2503.06674},
}
```