Update README.md
README.md
CHANGED
@@ -1,3 +1,33 @@
 ---
+language:
+- en
+tags:
+- pytorch
+- text-generation
+- causal-lm
+- rwkv
 license: bsd-2-clause
+datasets:
+- The Pile
+
 ---
+
+# RWKV-4 430M
+
+## Model Description
+
+RWKV-4 430M is a L24-D1024 causal language model trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details.
+
+At this moment you have to use my Github code (https://github.com/BlinkDL/RWKV-LM) to run it.
+
+ctx_len = 1024
+n_layer = 24
+n_embd = 1024
+
+Final checkpoint:
+RWKV-4-Pile-430M-20220808-8066.pth : Trained on the Pile for 333B tokens.
+* Pile loss 2.2621
+* LAMBADA ppl 13.04, acc 45.16%
+* PIQA acc 67.52%
+* SC2016 acc 63.87%
+* Hellaswag acc_norm 40.90%