SageAttention: Accurate 8-Bit Attention for Plug-and-play Inference Acceleration
Abstract
The transformer architecture predominates across various models. As the heart of the transformer, attention has a computational complexity of O(N^2), compared to O(N) for linear transformations. When handling large sequence lengths, attention becomes the primary time-consuming component. Although quantization has proven to be an effective method for accelerating model inference, existing quantization methods primarily focus on optimizing the linear layer. In response, we first analyze the feasibility of quantization in attention in detail. Following that, we propose SageAttention, a highly efficient and accurate quantization method for attention. The OPS (operations per second) of our approach outperforms FlashAttention2 and xformers by about 2.1 times and 2.7 times, respectively. SageAttention also achieves superior accuracy over FlashAttention3. Comprehensive experiments confirm that our approach incurs almost no end-to-end metric loss across diverse models, including those for large language processing, image generation, and video generation.
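To make the idea concrete, below is a minimal NumPy sketch of 8-bit attention: Q and K are quantized to INT8 with naive per-tensor scales, the QK^T scores are accumulated in INT32 and dequantized before the softmax, and the P·V product stays in floating point. This is an illustrative toy, not the paper's kernel (SageAttention uses a more careful quantization scheme and fused GPU kernels); all function and variable names here are made up.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor INT8 quantization: returns int8 values and a scale."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_attention(Q, K, V):
    """Toy 8-bit attention: INT8 Q*K^T with INT32 accumulation, FP32 softmax and P*V."""
    q_int8, q_scale = quantize_int8(Q)
    k_int8, k_scale = quantize_int8(K)
    # INT8 matmul with INT32 accumulation, then dequantize the scores.
    scores = q_int8.astype(np.int32) @ k_int8.astype(np.int32).T
    scores = scores.astype(np.float32) * (q_scale * k_scale) / np.sqrt(Q.shape[-1])
    # Softmax and the P*V product remain in floating point in this sketch.
    scores -= scores.max(axis=-1, keepdims=True)
    P = np.exp(scores)
    P /= P.sum(axis=-1, keepdims=True)
    return P @ V

def fp32_attention(Q, K, V):
    """Full-precision reference for comparison."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)
    P = np.exp(scores)
    P /= P.sum(axis=-1, keepdims=True)
    return P @ V

# Usage: compare the INT8 approximation against the full-precision result.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((64, 128)).astype(np.float32) for _ in range(3))
print(np.abs(int8_attention(Q, K, V) - fp32_attention(Q, K, V)).max())
```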
Community
Two points regarding speed need to be emphasized:
Based on the NVIDIA whitepaper [1], the peak throughput of FP8 Matmul is 330 TFLOPS, whereas INT8 Matmul reaches 660 TOPS.
Also, as detailed in [1], using an FP16 accumulator for FP16 Matmul achieves 330 TFLOPS, double the speed of using an FP32 accumulator.
[1] https://images.nvidia.com/aem-dam/Solutions/geforce/ada/nvidia-ada-gpu-architecture.pdf
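A back-of-envelope sketch of what these peak figures imply for a single attention score matmul (the sequence length, head dimension, and the 165 TFLOPS FP16/FP32-accumulate number are illustrative assumptions inferred from the comment above):

```python
# Theoretical lower-bound time for one Q*K^T matmul of shape (N, d) x (d, N),
# using the peak Ada throughput figures cited above (illustrative only).
N, d = 8192, 128                       # assumed sequence length and head dimension
matmul_flops = 2 * N * N * d           # multiply-adds counted as 2 ops each

peak_ops = {
    "FP8  (FP32 accum)": 330e12,       # 330 TFLOPS
    "INT8":              660e12,       # 660 TOPS
    "FP16 (FP32 accum)": 165e12,       # 165 TFLOPS (half of the FP16-accum figure)
    "FP16 (FP16 accum)": 330e12,       # 330 TFLOPS
}

for name, ops in peak_ops.items():
    print(f"{name:18s}: {matmul_flops / ops * 1e6:.1f} us per head (peak)")
```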
I think you made a typo on page 3 in "3.1 FlashAttention". Specifically:
"proposes to tile Q, K, and V from the token dimension into blocks {Qi}, {Ki}, {Vi} with block size of b_q, b_kv, b_kv, respectively." Shouldn't it be b_q, b_k, b_v instead of b_q, b_kv, b_kv?
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- MaskMamba: A Hybrid Mamba-Transformer Model for Masked Image Generation (2024)
- ABQ-LLM: Arbitrary-Bit Quantized Inference Acceleration for Large Language Models (2024)
- P4Q: Learning to Prompt for Quantization in Visual-language Models (2024)
- Rotated Runtime Smooth: Training-Free Activation Smoother for accurate INT4 inference (2024)
- VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models (2024)