---
language:
- en
tags:
- pytorch
- text-generation
- causal-lm
- rwkv
license: apache-2.0
datasets:
- the_pile
---
# RWKV-4 430M
# Use RWKV-4 models (NOT RWKV-4a, NOT RWKV-4b) unless you know what you are doing.
## Model Description
RWKV-4 430M is an L24-D1024 (24 layers, 1024-dimensional embeddings) causal language model trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details.
Use https://github.com/BlinkDL/ChatRWKV to run it.
* ctx_len = 1024
* n_layer = 24
* n_embd = 1024
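As a minimal sketch of how to run the checkpoint, the `rwkv` pip package (the runtime behind ChatRWKV) can load the `.pth` file directly. The file paths below are placeholders; the tokenizer file `20B_tokenizer.json` ships with the ChatRWKV repo.

```python
import os
os.environ["RWKV_JIT_ON"] = "1"  # enable the JIT kernels before importing rwkv

from rwkv.model import RWKV  # pip install rwkv
from rwkv.utils import PIPELINE, PIPELINE_ARGS

# Placeholder paths: point these at the downloaded checkpoint and tokenizer.
model = RWKV(model="RWKV-4-Pile-430M-20220808-8066", strategy="cpu fp32")
pipeline = PIPELINE(model, "20B_tokenizer.json")

args = PIPELINE_ARGS(temperature=1.0, top_p=0.85)
print(pipeline.generate("\nIn a shocking finding,", token_count=64, args=args))
```

Strategies like `"cuda fp16"` trade the CPU for a GPU; see the ChatRWKV repo for the full strategy syntax.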
Final checkpoint:
RWKV-4-Pile-430M-20220808-8066.pth: trained on the Pile for 333B tokens.
* Pile loss 2.2621
* LAMBADA ppl 13.04, acc 45.16%
* PIQA acc 67.52%
* SC2016 acc 63.87%
* Hellaswag acc_norm 40.90%
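The LAMBADA numbers follow the usual protocol: predict the final word of each passage given its context. As an illustrative sketch (assuming the `rwkv` package's `model.forward(tokens, state) -> (logits, state)` interface and the `pipeline` object from above), per-example scoring looks roughly like this:

```python
import torch
import torch.nn.functional as F

def lambada_last_word(model, pipeline, context, last_word):
    """Score the final word given its context (standard LAMBADA protocol).

    Sketch only: assumes model.forward(tokens, state) returns the logits
    for the last fed token plus the updated recurrent state.
    """
    state = None
    out, state = model.forward(pipeline.encode(context), state)
    logprob, correct = 0.0, True
    for tok in pipeline.encode(last_word):
        probs = F.softmax(out.float(), dim=-1)
        logprob += torch.log(probs[tok]).item()
        correct = correct and (int(torch.argmax(probs)) == tok)
        out, state = model.forward([tok], state)
    # Dataset perplexity is exp(-mean logprob); accuracy counts examples
    # where every target token is the greedy argmax.
    return logprob, correct
```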
With tiny attention (--tiny_att_dim 512 --tiny_att_layer 18); a rough sketch of the idea follows the numbers below:
RWKV-4a-Pile-433M-20221223-8039.pth
* Pile loss 2.2394
* LAMBADA ppl 10.54, acc 50.20%
* PIQA acc 68.12%
* SC2016 acc 63.55%
* Hellaswag acc_norm 40.82%
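To make the tiny-attention flags concrete, here is a hypothetical sketch: a low-dimensional causal self-attention branch (dim 512) added residually at a single layer (layer 18 of 24). RWKV-4a's actual implementation in RWKV-LM may differ in details; this only illustrates the general mechanism the flags name.

```python
import torch
import torch.nn as nn

class TinyAttention(nn.Module):
    # Illustrative sketch of --tiny_att_dim 512 --tiny_att_layer 18:
    # a cheap attention branch with reduced q/k dimension, added on top
    # of one RWKV block. Not the exact RWKV-4a code.
    def __init__(self, n_embd=1024, att_dim=512):
        super().__init__()
        self.q = nn.Linear(n_embd, att_dim, bias=False)
        self.k = nn.Linear(n_embd, att_dim, bias=False)
        self.v = nn.Linear(n_embd, n_embd, bias=False)
        self.scale = att_dim ** -0.5

    def forward(self, x):  # x: (batch, seq, n_embd)
        T = x.size(1)
        q, k, v = self.q(x), self.k(x), self.v(x)
        att = (q @ k.transpose(-2, -1)) * self.scale
        mask = torch.full((T, T), float("-inf"), device=x.device).triu(1)
        att = torch.softmax(att + mask, dim=-1)  # causal attention weights
        return x + att @ v  # residual add back into the block
```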
RWKV-4b variant: RWKV-4b-Pile-436M-20230211-8012.pth (trained with --my_testing 'a')
* Pile loss 2.2026
* LAMBADA ppl 10.48, acc 51.35%
* PIQA acc 68.06%
* SC2016 acc 63.17%
* Hellaswag acc_norm 42.09%