---
license: apache-2.0
language:
- en
---
## StripedHyena-Hessian-7B (SH 7B)
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/62a1306bbe7fa896d2c8de44/Bfjh77emDsWOY-VmfvU9C.png" width="60%" />
</p>
### About
One of the focus areas at Together Research is new architectures that improve on the Transformer in long-context capability, training, and inference performance. Spinning out of a research program from our team and academic collaborators, with roots in **signal processing-inspired sequence models**, we are excited to introduce the **StripedHyena** models. StripedHyena is the **first alternative model competitive with the best open-source Transformers** of similar sizes on short- and long-context evaluations.
**StripedHyena-Hessian-7B (SH 7B)** is our **base model** for this release.
- Read more here in [our blog](https://www.together.ai/blog/stripedhyena-7b).
- Play with the model on our [playground](https://api.together.xyz/playground/language/togethercomputer/StripedHyena-Hessian-7B)!
- Dive into the details of our [standalone implementation](https://github.com/togethercomputer/stripedhyena), and our related research: [1](https://arxiv.org/abs/2302.10866), [2](https://arxiv.org/abs/2310.18780), [3](https://arxiv.org/abs/2311.05908).
### Model Architecture
StripedHyena is a hybrid architecture composed of multi-head, grouped-query attention and gated convolutions arranged in [Hyena](https://arxiv.org/abs/2302.10866) blocks, different from traditional decoder-only Transformers.
- Constant memory decoding in Hyena blocks via representation of convolutions as state-space models (modal or canonical form), or as truncated filters (see the sketch after this list).
- Low latency, faster decoding and higher throughput than Transformers.
- Improved training and inference-optimal scaling laws, compared to optimized Transformer architectures such as Llama-2.
- Trained on sequences of up to 32k tokens, allowing it to process longer prompts.
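
As a toy illustration of the constant-memory decoding point above (not the actual StripedHyena kernels), the sketch below shows how a long convolution whose filter is a sum of decaying exponentials, i.e. the modal form parameterized by poles and residues, can be evaluated either as an explicit convolution or as an equivalent state-space recurrence that carries only one scalar of state per pole:

```python
import numpy as np

# Toy illustration (NOT the StripedHyena kernels): a filter in "modal form",
# h[t] = sum_i residues[i] * poles[i]**t, can be applied either as an explicit
# convolution or as a state-space recurrence with one scalar of state per pole.
# The recurrence is what enables constant-memory autoregressive decoding.

rng = np.random.default_rng(0)
num_poles, seq_len = 4, 64
poles = 0.9 * rng.uniform(size=num_poles)   # decay rate of each mode
residues = rng.normal(size=num_poles)       # weight of each mode
u = rng.normal(size=seq_len)                # input signal

# 1) Explicit convolution with the materialized filter (cost grows with length).
h = np.array([np.sum(residues * poles**t) for t in range(seq_len)])
y_conv = np.array([np.sum(h[: t + 1][::-1] * u[: t + 1]) for t in range(seq_len)])

# 2) Equivalent recurrence: the state x has one entry per pole, so memory per
#    decoding step is constant in the sequence length.
x = np.zeros(num_poles)
y_rec = np.empty(seq_len)
for t in range(seq_len):
    x = poles * x + u[t]        # x_i <- pole_i * x_i + u[t]
    y_rec[t] = residues @ x     # y[t] = sum_i residue_i * x_i

assert np.allclose(y_conv, y_rec)
```

Regardless of how long the prompt is, the recurrence only keeps `num_poles` state values per channel, in contrast to attention, whose key-value cache grows linearly with sequence length.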
### Note
To use StripedHyena outside of the playground, you will need to install custom kernels. Please follow the instructions from the [standalone repository](https://github.com/togethercomputer/stripedhyena).
StripedHyena is a mixed precision model. Make sure to keep your `poles` and `residues` in `float32` precision, especially for longer prompts or training.
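
As a rough sketch (not an official snippet), loading the model through the Hugging Face `transformers` auto classes typically looks like the following, assuming the custom kernels from the standalone repository are installed; `trust_remote_code=True` is needed because the modeling code ships with the checkpoint, and `device_map="auto"` assumes `accelerate` is available:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "togethercomputer/StripedHyena-Hessian-7B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # mixed precision; internal poles/residues should stay in float32 (see note above)
    trust_remote_code=True,      # custom StripedHyena modeling code ships with the checkpoint
    device_map="auto",
)

prompt = "StripedHyena is a hybrid architecture that"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```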