Add link to paper #1
opened by nielsr (HF staff)

README.md CHANGED
@@ -1,8 +1,13 @@
 ---
+pipeline_tag: text-generation
 language:
 - en
 ---
 
 This is a pure sub-quadratic linear attention 8B parameter model, linearized from the Meta Llama 3.1 8B model.
 
-Details on this model and how to train your own are provided at: https://github.com/HazyResearch/lolcats/tree/lolcats-scaled
+Details on this model and how to train your own are provided at: https://github.com/HazyResearch/lolcats/tree/lolcats-scaled
+
+## Paper
+
+See the paper page: https://huggingface.co/papers/2410.10254
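
For context on how a checkpoint like this might be used, here is a minimal loading sketch. It assumes the checkpoint can be loaded with the standard transformers Auto classes; the repo ID below is a placeholder (not verified), and the actual LoLCATs model may require the custom modeling code from the HazyResearch/lolcats repository linked in the README.

```python
# Minimal sketch, assuming the linearized checkpoint is loadable via the
# standard transformers Auto classes. The repo ID is a placeholder; the real
# model may need custom code from https://github.com/HazyResearch/lolcats/tree/lolcats-scaled
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hazyresearch/lolcats-llama-3.1-8b"  # placeholder, not a verified repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # allow any custom linear-attention layers shipped with the repo
)

prompt = "Linear attention replaces softmax attention by"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```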