This is a pure sub-quadratic linear-attention model with 70B parameters, linearized from the Meta Llama 3.1 70B starting point.

Details on this model and how to train your own are provided at: https://github.com/HazyResearch/lolcats/tree/lolcats-scaled
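Linear attention is sub-quadratic because applying a feature map to queries and keys makes the attention computation associative: instead of forming the O(n²) attention matrix, you can contract keys and values first. The toy sketch below illustrates this trick only; it is not the LoLCATs implementation (which learns its feature maps to match softmax attention), and the `elu(x) + 1` feature map is just a common illustrative choice.

```python
import numpy as np

def feature_map(x):
    # elu(x) + 1: a simple positive feature map, used here only for illustration
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    """O(n * d^2) attention: associativity lets us compute
    phi(Q) @ (phi(K).T @ V) instead of (phi(Q) @ phi(K).T) @ V."""
    Qf, Kf = feature_map(Q), feature_map(K)
    KV = Kf.T @ V                # (d, d_v): size independent of sequence length n
    Z = Qf @ Kf.sum(axis=0)      # (n,) per-query normalizer
    return (Qf @ KV) / Z[:, None]

def quadratic_equivalent(Q, K, V):
    """Same result computed via the explicit O(n^2) attention matrix."""
    Qf, Kf = feature_map(Q), feature_map(K)
    A = Qf @ Kf.T                # (n, n) attention matrix
    return (A @ V) / A.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
n, d, dv = 8, 4, 4
Q = rng.normal(size=(n, d))
K = rng.normal(size=(n, d))
V = rng.normal(size=(n, dv))
out_linear = linear_attention(Q, K, V)
out_quadratic = quadratic_equivalent(Q, K, V)
```

Because `KV` has shape (d, d_v) regardless of sequence length, the cost grows linearly in n rather than quadratically.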
