Stuffed Mamba: State Collapse and State Capacity of RNN-Based Long-Context Modeling
Abstract
One essential advantage of recurrent neural networks (RNNs) over transformer-based language models is their linear computational complexity with respect to sequence length, which makes them much faster at handling long sequences during inference. However, most publicly available RNNs (e.g., Mamba and RWKV) are trained on sequences of fewer than 10K tokens, and their effectiveness on longer contexts has so far remained largely unsatisfactory. In this paper, we study the causes of RNNs' inability to process long contexts and propose critical mitigations. We examine two practical concerns when applying state-of-the-art RNNs to long contexts: (1) the inability to extrapolate to inputs longer than the training length and (2) the upper bound on memory capacity. Addressing the first concern, we investigate *state collapse* (SC), a phenomenon that causes severe performance degradation on sequence lengths not encountered during training. With controlled experiments, we attribute this to overfitting caused by the recurrent state being overparameterized for the training length. For the second concern, we train a series of Mamba-2 models on long documents to empirically estimate the recurrent state capacity in language modeling and passkey retrieval. We then propose three SC mitigation methods that improve Mamba-2's length generalizability, allowing the model to process more than 1M tokens without SC. We also find that the recurrent state capacity in passkey retrieval scales exponentially with the state size, and we train a 370M-parameter Mamba-2 that achieves near-perfect passkey retrieval accuracy at a 256K context length. This suggests a promising future for RNN-based long-context modeling.
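To make the linear-complexity and fixed-state-capacity points concrete, here is a toy diagonal linear recurrence. This is an assumed simplification for illustration, not the actual Mamba-2 update (which uses input-dependent gating and a larger matrix-valued state): per-token cost is constant and total cost is linear in sequence length, but everything the model remembers must fit in the fixed-size state `h`, which is the bound on memory capacity studied in the paper.

```python
def linear_recurrence(xs, decay=0.9):
    """Toy diagonal linear recurrence: h_t = decay * h_{t-1} + x_t.

    Illustrative sketch only (hypothetical stand-in for a gated SSM
    update). The state h has a fixed size, so each step costs O(1)
    and a length-T sequence costs O(T) -- but the amount of context
    the model can retain is capped by the state dimension.
    """
    h = [0.0] * len(xs[0])          # fixed-size recurrent state
    outputs = []
    for x_t in xs:                  # one O(1) update per token
        h = [decay * h_i + x_i for h_i, x_i in zip(h, x_t)]
        outputs.append(list(h))
    return outputs

# 1000 tokens with a state dimension of 4: the state never grows.
seq = [[1.0] * 4 for _ in range(1000)]
ys = linear_recurrence(seq)
print(len(ys), len(ys[-1]))  # → 1000 4
```

With constant input 1.0 and decay 0.9, the state converges toward 1 / (1 - 0.9) = 10, illustrating how older tokens are exponentially forgotten as new ones are absorbed into the same fixed-size state.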
Community
First RNN with great in-context retrieval capabilities
The following similar papers were recommended by the Semantic Scholar API:
- Gated Slot Attention for Efficient Linear-Time Sequence Modeling (2024)
- Were RNNs All We Needed? (2024)
- FocusLLM: Scaling LLM's Context by Parallel Decoding (2024)
- Rodimus*: Breaking the Accuracy-Efficiency Trade-Off with Efficient Attentions (2024)
- Untie the Knots: An Efficient Data Augmentation Strategy for Long-Context Pre-Training in Language Models (2024)
This paper was a great read. We wrote a summary blog post covering this paper and a few others:
- Round and Round We Go: What Makes RoPE Work
- Stuffed Mamba: Addressing State Collapse in Mamba
- Amortized Planning with Transformers: Playing Chess
You can find it here. Please give it a read :)