Update README.md
README.md CHANGED
@@ -21,6 +21,8 @@ I had to lower max_positional_embeddings in config.json and model_max_length for
 This attempt had both max_position_embeddings and model_max_length set to 4096, which worked perfectly fine. I then reversed this to 200000 once I was uploading it.
 I think it should keep long context capabilities of the base model.
 
+In my testing it seems less unhinged than adamo1139/Yi-34b-200K-AEZAKMI-RAW-TOXIC-2702 and maybe a touch less uncensored, but still very much uncensored even with default system prompt "A chat."
+
 ## Quants!
 
 EXL2 quants coming soon, I think I will start by uploading 4bpw quant in a few days.
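
For anyone reproducing the config change described in the hunk above (lowering max_position_embeddings and model_max_length to 4096 for training, then restoring 200000 before upload), here is a minimal sketch of how that could be scripted. The folder name `Yi-34B-200K-AEZAKMI` and the assumption that `model_max_length` lives in `tokenizer_config.json` are illustrative guesses, not confirmed by the diff.

```python
import json
from pathlib import Path

# Hypothetical local checkout of the model; adjust to your own path.
MODEL_DIR = Path("Yi-34B-200K-AEZAKMI")

def set_context_length(model_dir: Path, length: int) -> None:
    """Patch max_position_embeddings in config.json and model_max_length
    (assumed to sit in tokenizer_config.json) to the given value."""
    config_path = model_dir / "config.json"
    config = json.loads(config_path.read_text())
    config["max_position_embeddings"] = length
    config_path.write_text(json.dumps(config, indent=2))

    tok_path = model_dir / "tokenizer_config.json"
    tok_config = json.loads(tok_path.read_text())
    tok_config["model_max_length"] = length
    tok_path.write_text(json.dumps(tok_config, indent=2))

# Lower the context window to 4096 before fine-tuning...
set_context_length(MODEL_DIR, 4096)
# ...and restore the base model's 200000 before uploading:
# set_context_length(MODEL_DIR, 200000)
```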