---
language:
  - en
tags:
  - pytorch
  - text-generation
  - causal-lm
  - rwkv
license: apache-2.0
datasets:
  - the_pile
---

# RWKV-4 7B

## Model Description

RWKV-4 7B is an L32-D4096 (32-layer, 4096-dimensional) causal language model trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details.

Use https://github.com/BlinkDL/ChatRWKV to run it.

`ctx_len = 1024`, `n_layer = 32`, `n_embd = 4096`
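
For a scripted quick start, the checkpoints can also be loaded with the author's `rwkv` pip package (`pip install rwkv`). The sketch below is a minimal example under assumptions, not the canonical setup: the checkpoint path is a placeholder, and the `strategy` string should match your hardware.

```python
from rwkv.model import RWKV
from rwkv.utils import PIPELINE, PIPELINE_ARGS

# Placeholder path -- point it at a downloaded checkpoint
# (extension omitted, following the package's examples).
model = RWKV(model='/path/to/RWKV-4-Pile-7B-20230109-ctx4096',
             strategy='cuda fp16')  # e.g. 'cpu fp32' without a GPU

# 20B_tokenizer.json ships with the ChatRWKV repository.
pipeline = PIPELINE(model, '20B_tokenizer.json')

out = pipeline.generate('In a shocking finding,',
                        token_count=64,
                        args=PIPELINE_ARGS(temperature=1.0, top_p=0.85))
print(out)
```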

**RWKV-4-Pile-7B-20230109-ctx4096.pth**: fine-tuned to ctx_len 4096.

* Likely the best. Please test.

**RWKV-4-Pile-7B-20230xxx-ctx8192-testxxx**: fine-tuned to ctx_len 8192.

* Slightly weaker than the ctx4096 model when ctx_len < 3k.

**RWKV-4-Pile-7B-20221115-8047.pth**: trained on the Pile for 332B tokens.

* Pile loss 1.8415
* LAMBADA ppl 4.38, acc 67.18%
* PIQA acc 76.06%
* SC2016 acc 73.44%
* Hellaswag acc_norm 65.51%

## Instruct-test models

These are only useful if you construct your prompt following the dataset templates.

Note: I am using the "Q: instruct\n\nA: result" prompt format for all instruct models.
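
As a hypothetical illustration of that template (reusing the `pipeline` object from the loading sketch above; the helper name is ours, not part of any release):

```python
def make_instruct_prompt(instruction: str) -> str:
    # Template from the note above: "Q: instruct\n\nA: result".
    # The model is expected to continue the text after "A:".
    return f'Q: {instruction}\n\nA:'

prompt = make_instruct_prompt('Write a haiku about mountains.')
print(pipeline.generate(prompt, token_count=100))
```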

**RWKV-4-Pile-7B-Instruct-test1**: instruct-tuned on https://huggingface.co/datasets/bigscience/xP3all/viewer/en/train

**RWKV-4-Pile-7B-Instruct-test2**: instruct-tuned on https://huggingface.co/datasets/Muennighoff/flan & NIv2

## Chinese models

**RWKV-4-Pile-7B-EngChn-testNovel-xxx**: for writing Chinese novels (trained on 200 GB of Chinese novels).

**RWKV-4-Pile-7B-EngChn-testxxx**: for Chinese Q&A (trained on 10 GB of Chinese text; for testing purposes only).