Unlimiformer: Long-Range Transformers with Unlimited Length Input
Abstract
Transformer-based models typically have a predefined bound to their input length, because of their need to potentially attend to every token in the input. In this work, we propose Unlimiformer: a general approach that can wrap any existing pretrained encoder-decoder transformer, and offload the attention computation across all layers to a single k-nearest-neighbor index; this index can be kept on either the GPU or CPU memory and queried in sub-linear time. This way, we can index extremely long input sequences, while every attention head in every decoder layer retrieves its top-k keys, instead of attending to every key. We demonstrate Unlimiformer's efficacy on several long-document and multi-document summarization benchmarks, showing that it can summarize even 350k token-long inputs from the BookSum dataset, without any input truncation at test time. Unlimiformer improves pretrained models such as BART and Longformer by extending them to unlimited inputs without additional learned weights and without modifying their code. We make our code and models publicly available at https://github.com/abertsch72/unlimiformer.
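As a rough illustration of the mechanism described in the abstract, below is a minimal sketch (not the authors' implementation) of top-k cross-attention over a datastore of encoder hidden states. It uses exact top-k search in plain PyTorch as a stand-in for the k-nearest-neighbor index, covers a single attention head, and all function and variable names are illustrative assumptions.

```python
# Minimal sketch, assuming exact top-k search in place of a real k-NN index
# (e.g., Faiss) and a single attention head. Not the authors' implementation.
import torch

def topk_cross_attention(decoder_states, encoder_states, W_q, W_k, W_v, k=16):
    """Cross-attention over only the top-k retrieved encoder positions.

    decoder_states: (tgt_len, d_model)  hidden states of one decoder layer
    encoder_states: (src_len, d_model)  the "datastore" of encoder hidden states
    W_q, W_k, W_v:  (d_model, d_head)   per-head projection matrices
    """
    # Reformulated query: (h_d W_q)(h_e W_k)^T == (h_d W_q W_k^T) h_e^T,
    # so a single index over raw encoder states can serve every head and layer.
    queries = decoder_states @ W_q @ W_k.T                # (tgt_len, d_model)

    # Stand-in for the k-NN index lookup: exact top-k by inner product.
    scores = queries @ encoder_states.T                   # (tgt_len, src_len)
    top_scores, top_idx = scores.topk(k, dim=-1)          # (tgt_len, k)

    # Attend only over the retrieved keys' values.
    probs = torch.softmax(top_scores / W_q.shape[1] ** 0.5, dim=-1)
    top_values = encoder_states[top_idx] @ W_v            # (tgt_len, k, d_head)
    return torch.einsum("tk,tkd->td", probs, top_values)  # (tgt_len, d_head)

# Toy usage: a 100k-token "input" attended to with only k=16 keys per query.
d_model, d_head = 512, 64
enc = torch.randn(100_000, d_model)
dec = torch.randn(8, d_model)
W_q, W_k, W_v = (torch.randn(d_model, d_head) * d_model ** -0.5 for _ in range(3))
out = topk_cross_attention(dec, enc, W_q, W_k, W_v, k=16)
print(out.shape)  # torch.Size([8, 64])
```

The query reformulation in the sketch is what lets one index serve all decoder layers and heads: the encoder states are stored once, unprojected, and each head folds its own projections into the query instead.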
Community
The last sentence in Appendix G:
"Borgeaud et al. (2022) incorporate retrieval from the external datastore into the architecture, which requires pretraining the model from scratch; ..."
is not quite correct: RETRO (arXiv:2112.04426) mentions the possibility of RETROfitting already pretrained models.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention (2024)
- IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (2024)
- Linearizing Large Language Models (2024)
- XC-Cache: Cross-Attending to Cached Context for Efficient LLM Inference (2024)
- LLoCO: Learning Long Contexts Offline (2024)