Tao Hu

taohu

AI & ML interests

None yet

Recent Activity

updated a model about 2 months ago
taohu/zigma
New activity about 2 months ago
taohu/zigma:Add link to paper

Organizations

None yet

taohu's activity

New activity in taohu/zigma about 2 months ago

Add link to paper

#1 opened about 2 months ago by nielsr
updated a dataset 2 months ago
updated a model 2 months ago
updated a dataset 2 months ago
upvoted an article 6 months ago
Reacted to akhaliq's post with 🔥 8 months ago
Fast High-Resolution Image Synthesis with Latent Adversarial Diffusion Distillation (arXiv:2403.12015)

Diffusion models are the main driver of progress in image and video synthesis, but suffer from slow inference speed. Distillation methods, like the recently introduced adversarial diffusion distillation (ADD), aim to shift the model from many-shot to single-step inference, albeit at the cost of expensive and difficult optimization due to its reliance on a fixed pretrained DINOv2 discriminator. We introduce Latent Adversarial Diffusion Distillation (LADD), a novel distillation approach overcoming the limitations of ADD. In contrast to pixel-based ADD, LADD utilizes generative features from pretrained latent diffusion models. This approach simplifies training and enhances performance, enabling high-resolution multi-aspect ratio image synthesis. We apply LADD to Stable Diffusion 3 (8B) to obtain SD3-Turbo, a fast model that matches the performance of state-of-the-art text-to-image generators using only four unguided sampling steps. Moreover, we systematically investigate its scaling behavior and demonstrate LADD's effectiveness in various applications such as image editing and inpainting.
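
The abstract describes the approach at a high level: a one-step student generator is trained adversarially against a discriminator that judges samples in latent space using features from a pretrained diffusion backbone. The sketch below illustrates that general pattern only; it is not the LADD objective or code from the paper, and every module, shape, loss weight, and the toy "data" are hypothetical placeholders.

```python
# Minimal sketch of adversarial distillation with a latent-space discriminator.
# Illustrative only: all modules and shapes are hypothetical stand-ins,
# not the LADD implementation described in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim = 4   # hypothetical latent channel count
spatial = 32     # hypothetical latent resolution

class TinyUNet(nn.Module):
    """Stand-in for a latent diffusion backbone (teacher or student)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(latent_dim, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, latent_dim, 3, padding=1),
        )
    def forward(self, z):
        return self.net(z)

class LatentFeatureDiscriminator(nn.Module):
    """Discriminator head on top of a frozen teacher, so realism is judged
    in latent space rather than on pixels (the core idea vs. pixel-based ADD)."""
    def __init__(self, teacher):
        super().__init__()
        self.teacher = teacher
        for p in self.teacher.parameters():
            p.requires_grad_(False)   # teacher features stay frozen
        self.head = nn.Sequential(
            nn.Conv2d(latent_dim, 64, 3, padding=1), nn.SiLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )
    def forward(self, z):
        return self.head(self.teacher(z))

teacher = TinyUNet()   # stands in for the pretrained latent diffusion model
student = TinyUNet()   # single-step generator being distilled
disc = LatentFeatureDiscriminator(teacher)

opt_g = torch.optim.Adam(student.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.head.parameters(), lr=1e-4)

for step in range(2):  # toy loop; real training runs far longer
    # Placeholder for VAE-encoded training images.
    real_latents = torch.randn(8, latent_dim, spatial, spatial)
    noise = torch.randn_like(real_latents)

    # Discriminator update: real latents vs. detached one-step student samples.
    fake_latents = student(noise).detach()
    d_loss = (F.softplus(-disc(real_latents)).mean()
              + F.softplus(disc(fake_latents)).mean())
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: fool the discriminator and roughly match the teacher.
    # (A single teacher call here stands in for a full denoising trajectory.)
    fake_latents = student(noise)
    adv_loss = F.softplus(-disc(fake_latents)).mean()
    distill_loss = F.mse_loss(fake_latents, teacher(noise))
    g_loss = adv_loss + distill_loss
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The point of the sketch is the placement of the discriminator: it consumes latents passed through a frozen diffusion backbone rather than decoded pixels, which is what lets this style of distillation avoid a separate fixed pixel-space discriminator such as DINOv2.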