Diffusion-Sharpening: Fine-tuning Diffusion Models with Denoising Trajectory Sharpening
Abstract
We propose Diffusion-Sharpening, a fine-tuning approach that enhances downstream alignment by optimizing sampling trajectories. Existing RL-based fine-tuning methods focus on single training timesteps and neglect trajectory-level alignment, while recent sampling-trajectory optimization methods incur significant inference costs in NFEs (number of function evaluations). Diffusion-Sharpening overcomes this by using a path-integral framework to select optimal trajectories during training, leveraging reward feedback and amortizing inference costs. Our method demonstrates superior training efficiency with faster convergence and the best inference efficiency, requiring no additional NFEs. Extensive experiments show that Diffusion-Sharpening outperforms RL-based fine-tuning methods (e.g., Diffusion-DPO) and sampling-trajectory optimization methods (e.g., Inference Scaling) across diverse metrics, including text alignment, compositional capabilities, and human preferences, offering a scalable and efficient solution for future diffusion model fine-tuning. Code: https://github.com/Gen-Verse/Diffusion-Sharpening
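To make the trajectory-level idea concrete, below is a minimal, self-contained PyTorch sketch of reward-guided trajectory selection during fine-tuning: sample several candidate denoising trajectories from the same starting noise, score them with a reward model, and fine-tune toward the best-scoring one. This is an illustration of the general recipe described in the abstract, not the paper's implementation; the denoiser, reward, sampler, update rule, and all names (TinyDenoiser, toy_reward, sample_trajectory, sharpen_step) are hypothetical stand-ins. See the linked repository for the actual method.

```python
# Hedged sketch of trajectory-level, reward-guided fine-tuning in the spirit of
# Diffusion-Sharpening. All components below are illustrative stand-ins.
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Toy epsilon-prediction network on flat 8-dim 'images' (assumption, not the paper's model)."""
    def __init__(self, dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 64), nn.SiLU(), nn.Linear(64, dim))

    def forward(self, x, t):
        # Append the (normalized) timestep as an extra scalar feature per sample.
        t_feat = t.float().unsqueeze(-1) / 1000.0
        return self.net(torch.cat([x, t_feat], dim=-1))

def toy_reward(x0):
    """Stand-in reward model: prefers samples close to the all-ones vector."""
    return -(x0 - 1.0).pow(2).mean(dim=-1)

@torch.no_grad()
def sample_trajectory(model, x_t, timesteps):
    """Roll out a simplified denoising trajectory, recording every intermediate state."""
    traj = [x_t]
    for t in timesteps:
        eps = model(x_t, torch.full((x_t.size(0),), t))
        x_t = x_t - 0.1 * eps  # crude update; a real sampler follows the noise schedule
        traj.append(x_t)
    return traj

def sharpen_step(model, optimizer, x_T, timesteps, n_candidates=4):
    """Sample several trajectories, keep the highest-reward one, and fine-tune toward it."""
    # 1) Explore: several candidate trajectories from the same starting noise.
    candidates = [sample_trajectory(model, x_T.clone(), timesteps) for _ in range(n_candidates)]
    rewards = torch.stack([toy_reward(traj[-1]).mean() for traj in candidates])
    best = candidates[rewards.argmax()]

    # 2) Sharpen: regress the model's per-step predictions onto the selected trajectory.
    loss = 0.0
    for (x_t, x_next), t in zip(zip(best[:-1], best[1:]), timesteps):
        eps = model(x_t, torch.full((x_t.size(0),), t))
        target_eps = (x_t - x_next) / 0.1  # inverts the crude update above
        loss = loss + (eps - target_eps).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item(), rewards.max().item()

if __name__ == "__main__":
    model = TinyDenoiser()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    timesteps = list(range(999, -1, -100))
    for step in range(5):
        x_T = torch.randn(16, 8)
        loss, best_r = sharpen_step(model, opt, x_T, timesteps)
        print(f"step {step}: loss={loss:.4f} best_reward={best_r:.4f}")
```

Because the selection happens at training time, the fine-tuned model needs no extra reward queries or candidate rollouts at inference, which is the sense in which the trajectory-search cost is amortized.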
Community
Great work! Are you planning to release the weights for the models trained for the paper?