---
tags:
- manticore
- guanaco
- uncensored
library_name: transformers
pipeline_tag: text-generation
---
# 4bit GPTQ of:
Manticore-13b-Chat-Pyg by [openaccess-ai-collective](https://huggingface.co/openaccess-ai-collective/manticore-13b-chat-pyg) with the Guanaco 13b qLoRa by [TimDettmers](https://huggingface.co/timdettmers/guanaco-13b) applied through [Monero](https://huggingface.co/Monero/Manticore-13b-Chat-Pyg-Guanaco), quantized by [mindrage](https://huggingface.co/mindrage), uncensored


[link to GGML Version](https://huggingface.co/mindrage/Manticore-13B-Chat-Pyg-Guanaco-GGML-q4_0)

---


Quantized to 4bit GPTQ, groupsize 128, no-act-order.

Command used to quantize (with GPTQ-for-LLaMa; the input model path is shown here as a placeholder):
python3 llama.py ./Manticore-13b-Chat-Pyg-Guanaco c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors Manticore-13B-Chat-Pyg-Guanaco-GPTQ-4bit-128g.no-act-order.safetensors
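For reference, here is a minimal loading sketch using AutoGPTQ; the repository id and `model_basename` are assumptions and should be adjusted to match the actual files in this repo:

```python
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

# Assumed repo id / basename -- check the file listing of this repository.
repo_id = "mindrage/Manticore-13B-Chat-Pyg-Guanaco-GPTQ-4bit-128g.no-act-order"
basename = "Manticore-13B-Chat-Pyg-Guanaco-GPTQ-4bit-128g.no-act-order"

tokenizer = AutoTokenizer.from_pretrained(repo_id, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    repo_id,
    model_basename=basename,
    use_safetensors=True,
    device="cuda:0",
)
```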



The model seems to have noticeably benefited from the further augmentation with the Guanaco qLoRa.
Its capabilities seem broad, even compared with other Wizard or Manticore models, with the expected weaknesses in coding. It is very good at in-context learning and, for its class, at reasoning.
It both follows instructions well and works well as a chatbot.
Refreshingly, it does not insist as aggressively as similar models on sticking to narratives that justify previously hallucinated output. Its output seems... eerily smart at times.
I believe the model is fully unrestricted/uncensored and will generally not berate the user.

---

Prompting style + settings:
---
Presumably due to the very diverse training data, the model accepts a variety of prompting styles with relatively few issues, including the ###-variant, but it seems to work best with the format below.
"Naming" the model works well by simply modifying the context. Substantial changes in its behaviour can be produced by appending to "ASSISTANT:", e.g. "ASSISTANT: After careful consideration, thinking step-by-step, my response is:"

user: "USER:" - 
bot: "ASSISTANT:" - 
context: "This is a conversation between an advanced AI and a human user."

Turn Template: <|user|> <|user-message|>\n<|bot|><|bot-message|>\n
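
As a concrete illustration of the turn template, here is a small sketch that assembles a prompt in this format (the helper function and example message are invented for illustration):

```python
context = "This is a conversation between an advanced AI and a human user."
turn_template = "{user} {user_message}\n{bot}{bot_message}\n"

def build_prompt(history, new_user_message):
    """history is a list of (user_message, bot_message) pairs."""
    prompt = context + "\n"
    for user_msg, bot_msg in history:
        prompt += turn_template.format(
            user="USER:", user_message=user_msg,
            bot="ASSISTANT:", bot_message=bot_msg,
        )
    # Leave the assistant turn open so the model completes it.
    prompt += f"USER: {new_user_message}\nASSISTANT:"
    return prompt

print(build_prompt([], "Explain in-context learning in two sentences."))
```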

Settings that work well without (subjectively) being too deterministic (a generation sketch using them follows the list):

- temp: 0.15
- top_p: 0.1
- top_k: 40
- repetition penalty: 1.1
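
A generation call plugging in these settings might look like the following sketch (it assumes the `model`, `tokenizer`, and `build_prompt` from the sketches above, which are themselves assumptions, not part of this repo):

```python
prompt = build_prompt([], "Explain in-context learning in two sentences.")
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")

output_ids = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.15,
    top_p=0.1,
    top_k=40,
    repetition_penalty=1.1,
)

# Decode only the newly generated tokens, not the prompt.
reply = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
)
print(reply)
```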
---