byroneverson committed 6ba7f97 (parent: 0598906): Update README.md

README.md CHANGED
@@ -19,6 +19,9 @@ library_name: transformers
 
 
 # glm-4-9b-chat-abliterated
+
+## Updated 9/1/2024: Layer 17 is used for abliteration instead of 16. Refusal mitigation tends to work better with this layer. PCA and cosine similarity tests seem to agree.
+
 Check out the <a href="https://huggingface.co/byroneverson/glm-4-9b-chat-abliterated/blob/main/abliterate-glm-4-9b-chat.ipynb">jupyter notebook</a> for details of how this model was abliterated from glm-4-9b-chat.
 
 The python package "tiktoken" is required to quantize the model into gguf format. So I had to create <a href="https://huggingface.co/spaces/byroneverson/gguf-my-repo-plus-tiktoken">a fork of GGUF My Repo (+tiktoken)</a>.
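For context, the abliteration step the diff refers to can be sketched as follows. This is a minimal NumPy illustration, not the notebook's actual code: the helper names, the toy data, and the exact projection form are assumptions. The general idea is to capture hidden states at the chosen layer (layer 17 here) for harmful vs. harmless prompts, take the normalized mean-difference as a "refusal direction", and project that direction out of the model's weights.

```python
import numpy as np

def refusal_direction(harmful_acts, harmless_acts):
    """Mean-difference 'refusal direction' from per-layer activations.

    Both inputs are (n_samples, d_model) arrays of hidden states
    captured at the chosen layer (layer 17 for this model).
    """
    direction = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return direction / np.linalg.norm(direction)

def ablate(weight, direction):
    """Remove the direction from a weight matrix's output space:
    W_abl = (I - v v^T) W, so outputs of W_abl have no component along v."""
    v = direction.reshape(-1, 1)          # (d_model, 1)
    return weight - v @ (v.T @ weight)    # W - v v^T W

# Toy stand-in data; real use captures activations from the model itself.
rng = np.random.default_rng(0)
harmful = rng.normal(size=(32, 64)) + 1.0   # offset simulates a shared direction
harmless = rng.normal(size=(32, 64))

v = refusal_direction(harmful, harmless)
W = rng.normal(size=(64, 64))
W_abl = ablate(W, v)

# After ablation, v has (numerically) zero projection through W_abl.
print(np.abs(v @ W_abl).max())
```

In practice this projection would be applied to the relevant attention/MLP output matrices across layers; the cosine-similarity and PCA tests mentioned above are ways of checking which layer's mean-difference direction is most consistent before committing to it.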