TLCM: Training-efficient Latent Consistency Model for Image Generation with 2-8 Steps
📃 Paper
We propose an innovative two-stage data-free consistency distillation (TDCD) approach to accelerate the latent consistency model. The first stage improves the consistency constraint via data-free sub-segment consistency distillation (DSCD). The second stage enforces global consistency across segments through data-free consistency distillation (DCD). In addition, we explore various techniques to boost TLCM's performance in a data-free manner, forming the Training-efficient Latent Consistency Model (TLCM) with 2-8 step inference.
TLCM is highly flexible: the number of sampling steps can be adjusted anywhere from 2 to 8 while still producing outputs competitive with full-step approaches.
This repository contains the TLCM LoRA for the FLUX.1-dev base model.
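Below is a minimal inference sketch using the diffusers library, assuming a recent version with `FluxPipeline` and Flux LoRA support. The prompt, step count, and guidance scale are illustrative placeholders, and the LoRA weight filename resolved by `load_lora_weights` should be checked against this repository's file list.

```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base pipeline (requires accepting the model license on the Hub).
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)

# Apply the TLCM LoRA from this repository. By default diffusers looks for a
# standard LoRA weight file; pass weight_name=... if the repo uses another name.
pipe.load_lora_weights("OPPOer/TLCMFlux")
pipe.to("cuda")

# TLCM supports 2-8 sampling steps; 4 steps is used here as an example.
image = pipe(
    "a photo of an astronaut riding a horse on the moon",
    num_inference_steps=4,
    guidance_scale=3.5,
).images[0]
image.save("tlcm_flux_sample.png")
```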
Base model: black-forest-labs/FLUX.1-dev