Update README.md
<h1 style="text-align: center">Pygmalion 7B</h1>
<h2 style="text-align: center">A conversational LLaMA fine-tune.</h2>

> Currently, KoboldCPP is unable to stop inference when an EOS token is emitted, which causes the model to devolve into gibberish.
>
> This EOS issue is fixed on the dev branch of KoboldCPP, so make sure you're compiling the latest version; the fix landed only after this model was released.
>
> When running KoboldCPP, you will need to add the `--unbantokens` flag for this model to behave properly.

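As a concrete sketch, a launch with the flag might look like the following (the model filename is illustrative, and the exact invocation may differ across KoboldCPP versions):

```shell
# Hypothetical model filename; --unbantokens unbans special tokens such as EOS
# so generation can stop cleanly instead of devolving into gibberish.
python koboldcpp.py --unbantokens pygmalion-7b-q4_0.bin
```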
## Model Details
Pygmalion 7B is a dialogue model based on Meta's LLaMA-7B.