---
base_model: THUDM/glm-4-9b-chat
pipeline_tag: text-generation
license: other
license_name: glm-4
license_link: https://huggingface.co/THUDM/glm-4-9b-chat/blob/main/LICENSE
language:
- zh
- en
tags:
- glm
- chatglm
- thudm
- chat
- abliterated
library_name: transformers
---



# glm-4-9b-chat-abliterated

## Version 1.1 (updated 9/1/2024): Layer 17 is now used for abliteration instead of layer 16. Refusal mitigation tends to work better with this layer, and PCA and cosine-similarity tests agree.

Check out the <a href="https://huggingface.co/byroneverson/glm-4-9b-chat-abliterated/blob/main/abliterate-glm-4-9b-chat.ipynb">Jupyter notebook</a> for details of how this model was abliterated from glm-4-9b-chat.
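For readers unfamiliar with the technique: abliteration typically extracts a "refusal direction" from a target layer's activations (here, layer 17) and projects it out of the model's weights. The sketch below is a minimal NumPy illustration of that idea, not the notebook's actual code; the hidden states, dimensions, and names are all placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder layer-17 hidden states (batch, hidden_dim); in practice these
# come from running the model on refusal-inducing vs. harmless prompts.
harmful = rng.standard_normal((64, 4096))
harmless = rng.standard_normal((64, 4096))

# Refusal direction: difference of mean activations, normalized to unit length.
refusal_dir = harmful.mean(axis=0) - harmless.mean(axis=0)
refusal_dir /= np.linalg.norm(refusal_dir)

def ablate(weight: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Remove the component of the weight matrix's output along `direction`."""
    return weight - np.outer(direction, direction @ weight)

# Placeholder weight matrix standing in for a transformer projection.
W = rng.standard_normal((4096, 4096))
W_abl = ablate(W, refusal_dir)

# After ablation, outputs have (near-)zero component along the refusal direction.
residual = refusal_dir @ (W_abl @ np.ones(4096))
```

The projection guarantees `refusal_dir @ W_abl ≈ 0`, so no input can push the layer's output along the refusal direction.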

The Python package `tiktoken` is required to quantize this model into GGUF format, so I created <a href="https://huggingface.co/spaces/byroneverson/gguf-my-repo-plus-tiktoken">a fork of GGUF My Repo (+tiktoken)</a>.

![Logo](https://huggingface.co/byroneverson/glm-4-9b-chat-abliterated/resolve/main/logo.png "Logo")