digitous committed
Commit b7ca89b
1 Parent(s): 539f5f0

Update README.md

Files changed (1): README.md +5 -6
README.md CHANGED
@@ -12,16 +12,15 @@ Below is a use case example of using Alpac(ino) in either Text-Generation-WebUI
 Enable chat mode, name the user Player and the AI Narrator, tailor the instructions below as desired and paste in context/memory field-
 
 
-### Instruction: (carriage return)
+### Instruction:(carriage return)
 Make Narrator function as a text based adventure game that responds with verbose, detailed, and creative descriptions of what happens next after Player's response.
 Make Player function as the player input for Narrator's text based adventure game, controlling a character named (insert character name here, their short bio, and
 whatever quest or other information to keep consistent in the interaction).
-### Response:
-(carriage return after response)
+### Response:(carriage return)
 
-Testing subjectively suggests ideal presets (for both TGUI and KAI) are "Storywriter" (temp raised to 1.1) or "Godlike" with context tokens
-set to 2048 and max generation tokens ~680 or greater. This model is intelligent enough to determine when to stop writing and will rarely
-use half as many tokens, however verbosity will vary depending on provided instructions.
+Testing subjectively suggests ideal presets for both TGUI and KAI are "Storywriter" (temp raised to 1.1) or "Godlike" with context tokens
+at 2048 and max generation tokens at ~680 or greater. This model is intelligent enough to determine when to stop writing and will rarely
+use half as many tokens.
 
 Legalese:
 This release contains modified weights from the LLaMA-13b model. This model is under a non-commercial
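The Alpaca-style template the diff settles on (a `### Instruction:` block immediately followed by its text, then `### Response:` with a trailing newline) can be assembled programmatically. A minimal sketch, assuming the instruction text from the model card; the `build_prompt` helper and the sample character bio are illustrative, not part of the model card:

```python
# Assemble the Alpaca-style prompt block described in the updated README.
# The instruction wording is taken from the diff above; the character bio
# passed in is a hypothetical example.

def build_prompt(character_bio: str) -> str:
    instruction = (
        "Make Narrator function as a text based adventure game that responds "
        "with verbose, detailed, and creative descriptions of what happens "
        "next after Player's response.\n"
        "Make Player function as the player input for Narrator's text based "
        "adventure game, controlling a character named "
        f"{character_bio}."
    )
    # Per the diff, no space follows the colon; the "(carriage return)"
    # notes in the README become literal newlines here.
    return f"### Instruction:\n{instruction}\n### Response:\n"

prompt = build_prompt("Aldren, a wandering cartographer")
print(prompt)
```

The returned string is what would be pasted into the context/memory field, with the model expected to continue after the final `### Response:` line.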