Text Generation
PyTorch
causal-lm
rwkv
BlinkDL committed
Commit 88df261
1 Parent(s): 6f726bb

Update README.md

Files changed (1)
  1. README.md +5 -5
README.md CHANGED
@@ -35,16 +35,16 @@ How to use:
 
 The difference between World & Raven:
 * set pipeline = PIPELINE(model, "rwkv_vocab_v20230424") instead of 20B_tokenizer.json (EXACTLY AS WRITTEN HERE. "rwkv_vocab_v20230424" is included in rwkv 0.7.4+)
-* use Q/A or User/AI or Human/Bot prompt. **DO NOT USE Bob/Alice**
+* use Question/Answer or User/AI or Human/Bot prompt. **DO NOT USE Bob/Alice or Q/A**
 * use **fp32** (will overflow in fp16 at this moment - fixable in future)
 
 A good prompt example:
 ```
-Q: hi
+Question: hi
 
-A: Hi. I am your assistant and I will provide expert full response in full details. Please feel free to ask any question and I will always answer it.
+Answer: Hi. I am your assistant and I will provide expert full response in full details. Please feel free to ask any question and I will always answer it.
 
-Q: xxxxxx
+Question: xxxxxx
 
-A:
+Answer:
 ```
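The change above amounts to a Question/Answer prompt layout with a blank line between turns, ending on an open "Answer:" cue. A minimal sketch of assembling such a prompt (`build_world_prompt` is a hypothetical helper, not part of the rwkv package; only the layout comes from the diff):

```python
# Builds a prompt in the Question/Answer format the commit switches to
# (Q/A and Bob/Alice are explicitly ruled out for the World models).
# build_world_prompt is a hypothetical helper for illustration only.

def build_world_prompt(history, question):
    """Join past (question, answer) turns and a new question, separated by
    blank lines, ending with an open "Answer:" cue for the model to complete."""
    blocks = []
    for q, a in history:
        blocks.append(f"Question: {q}")
        blocks.append(f"Answer: {a}")
    blocks.append(f"Question: {question}")
    blocks.append("Answer:")
    return "\n\n".join(blocks)
```

With one greeting turn in `history`, the returned string matches the README's "good prompt example" verbatim.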