ZipAR: Accelerating Autoregressive Image Generation through Spatial Locality
Abstract
In this paper, we propose ZipAR, a training-free, plug-and-play parallel decoding framework for accelerating autoregressive (AR) visual generation. The motivation stems from the observation that images exhibit local structure: spatially distant regions tend to have minimal interdependence. Given a partially decoded set of visual tokens, in addition to the original next-token prediction along the row dimension, tokens corresponding to spatially adjacent regions along the column dimension can be decoded in parallel, enabling a "next-set prediction" paradigm. By decoding multiple tokens in a single forward pass, the number of forward passes required to generate an image is significantly reduced, yielding a substantial improvement in generation efficiency. Experiments demonstrate that ZipAR can reduce the number of model forward passes by up to 91% on the Emu3-Gen model without requiring any additional retraining.
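As a rough illustration of the next-set prediction idea (a sketch, not the paper's actual implementation), the snippet below computes which grid positions a ZipAR-style decoder could emit in each forward pass. It assumes a raster-order token grid and a single locality-window hyperparameter; the function name `zipar_schedule` and the exact readiness rule are our own simplifications.

```python
def zipar_schedule(height, width, window):
    """Toy ZipAR-style decoding schedule for a height x width token grid.

    Returns a list of steps, where each step is the list of (row, col)
    positions decoded in one model forward pass. A position (r, c) becomes
    decodable once row r - 1 has been decoded through column
    min(c + window, width - 1), so rows advance together as a
    staircase-shaped wavefront instead of strictly one after another.
    """
    progress = [0] * height          # next column to decode in each row
    steps = []
    while progress[-1] < width:      # run until the last row is complete
        step = []
        for r in range(height):
            c = progress[r]
            if c >= width:
                continue             # this row is already finished
            # Row 0 has nothing above it; any other row waits until the
            # previous row has covered the local context window.
            needed = min(c + window, width - 1)
            if r == 0 or progress[r - 1] > needed:
                step.append((r, c))
        for r, c in step:            # advance every row decoded this step
            progress[r] += 1
        steps.append(step)
    return steps


if __name__ == "__main__":
    steps = zipar_schedule(height=4, width=4, window=1)
    print(f"{sum(len(s) for s in steps)} tokens in {len(steps)} forward passes")
    # 16 tokens in 10 forward passes, versus 16 passes for plain
    # next-token prediction; larger grids save proportionally more.
```

With a very large window the schedule degenerates to strictly row-by-row decoding, while a small window lets many rows advance concurrently, which is how the forward-pass count drops so sharply on large token grids.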
Community
The following papers, recommended by the Semantic Scholar API, are similar to this paper:
- CART: Compositional Auto-Regressive Transformer for Image Generation (2024)
- Collaborative Decoding Makes Visual Auto-Regressive Modeling Efficient (2024)
- Continuous Speculative Decoding for Autoregressive Image Generation (2024)
- 3D representation in 512-Byte: Variational tokenizer is the key for autoregressive 3D generation (2024)
- RandAR: Decoder-only Autoregressive Visual Generation in Random Orders (2024)
- ENAT: Rethinking Spatial-temporal Interactions in Token-based Image Synthesis (2024)
- FoPru: Focal Pruning for Efficient Large Vision-Language Models (2024)