---
base_model: THUDM/glm-4-9b-chat
pipeline_tag: text-generation
license: other
license_name: glm-4
license_link: https://huggingface.co/THUDM/glm-4-9b-chat/blob/main/LICENSE
language:
  - zh
  - en
tags:
  - glm
  - chatglm
  - thudm
  - chat
  - abliterated
library_name: transformers
---

# glm-4-9b-chat-abliterated

Version 1.1 (updated 9/1/2024): Layer 17 is now used for abliteration instead of layer 16. Refusal mitigation tends to work better with this layer, and PCA and cosine-similarity tests seem to agree.

Check out the Jupyter notebook for details of how this model was abliterated from glm-4-9b-chat.
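For a rough illustration of the core idea (a minimal sketch, not the notebook's exact code): compute a refusal direction from the difference of mean activations at the chosen layer, then project that direction out of the model's weights. All names below, and the difference-of-means method itself, are illustrative assumptions.

```python
import torch

def refusal_direction(harmful_acts: torch.Tensor,
                      harmless_acts: torch.Tensor) -> torch.Tensor:
    """Normalized difference-of-means direction at the chosen layer.

    Both inputs are [n_prompts, d_model] hidden states collected at the
    abliteration layer (layer 17 in v1.1 of this model).
    """
    direction = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)
    return direction / direction.norm()

def ablate(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove the refusal direction from a weight matrix's output space.

    weight is [d_model, d_in]; W <- W - r r^T W zeroes the component of
    every output along the unit direction r.
    """
    return weight - torch.outer(direction, direction) @ weight
```

In this style of abliteration, the projection is typically applied to the matrices that write into the residual stream (e.g. attention output and MLP down-projections), so the refusal component can no longer be expressed.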

The Python package `tiktoken` is required to quantize the model into GGUF format, so I had to create a fork of GGUF My Repo (+tiktoken).
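If converting locally instead of through the Space, a hedged sketch along these lines should work, assuming llama.cpp's `convert_hf_to_gguf.py` script (paths and the output filename are illustrative):

```python
# Sketch of a local HF -> GGUF conversion. tiktoken must be installed
# first because the GLM-4 tokenizer code imports it at conversion time,
# which is why the stock Space needed the +tiktoken fork.
import subprocess
import sys

subprocess.run([sys.executable, "-m", "pip", "install", "tiktoken"], check=True)

subprocess.run([
    sys.executable, "convert_hf_to_gguf.py",       # from a llama.cpp checkout
    "glm-4-9b-chat-abliterated",                   # local model directory
    "--outfile", "glm-4-9b-chat-abliterated.gguf",
    "--outtype", "f16",                            # half-precision base GGUF
], check=True)
```

Lower-bit quants (e.g. `q8_0` via `--outtype`, or llama.cpp's separate quantize tool) can then be produced from the f16 base.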
