---
base_model: THUDM/glm-4-9b-chat
inference: false
language:
- zh
- en
library_name: gguf
license: other
license_link: https://huggingface.co/THUDM/glm-4-9b-chat/blob/main/LICENSE
license_name: glm-4
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- glm
- chatglm
- thudm
- quantized
- GGUF
- quantization
- static
- 16bit
- 8bit
- 6bit
- 5bit
- 4bit
- 3bit
- 2bit
---
# glm-4-9b-chat-GGUF
_Llama.cpp static quantization of THUDM/glm-4-9b-chat_
Original Model: [THUDM/glm-4-9b-chat](https://huggingface.co/THUDM/glm-4-9b-chat)
Original dtype: `BF16` (`bfloat16`)
Quantized with: llama.cpp [PR #6999](https://github.com/ggerganov/llama.cpp/pull/6999)
IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt)
- [Files](#files)
- [Common Quants](#common-quants)
- [All Quants](#all-quants)
- [Downloading using huggingface-cli](#downloading-using-huggingface-cli)
- [Inference](#inference)
- [Simple chat template](#simple-chat-template)
- [Chat template with system prompt](#chat-template-with-system-prompt)
- [Llama.cpp](#llama-cpp)
- [FAQ](#faq)
- [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere)
- [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf)
---
## Files
### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| glm-4-9b-chat.Q8_0 | Q8_0 | - | ⏳ Processing | ⚪ Static | -
| [glm-4-9b-chat.Q6_K.gguf](https://huggingface.co/legraphista/glm-4-9b-chat-GGUF/blob/main/glm-4-9b-chat.Q6_K.gguf) | Q6_K | 8.26GB | ✅ Available | ⚪ Static | 📦 No
| [glm-4-9b-chat.Q4_K.gguf](https://huggingface.co/legraphista/glm-4-9b-chat-GGUF/blob/main/glm-4-9b-chat.Q4_K.gguf) | Q4_K | 6.25GB | ✅ Available | ⚪ Static | 📦 No
| glm-4-9b-chat.Q3_K | Q3_K | - | ⏳ Processing | ⚪ Static | -
| glm-4-9b-chat.Q2_K | Q2_K | - | ⏳ Processing | ⚪ Static | -
### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| glm-4-9b-chat.BF16 | BF16 | - | ⏳ Processing | ⚪ Static | -
| glm-4-9b-chat.FP16 | F16 | - | ⏳ Processing | ⚪ Static | -
| glm-4-9b-chat.Q8_0 | Q8_0 | - | ⏳ Processing | ⚪ Static | -
| [glm-4-9b-chat.Q6_K.gguf](https://huggingface.co/legraphista/glm-4-9b-chat-GGUF/blob/main/glm-4-9b-chat.Q6_K.gguf) | Q6_K | 8.26GB | ✅ Available | ⚪ Static | 📦 No
| glm-4-9b-chat.Q5_K | Q5_K | - | ⏳ Processing | ⚪ Static | -
| glm-4-9b-chat.Q5_K_S | Q5_K_S | - | ⏳ Processing | ⚪ Static | -
| [glm-4-9b-chat.Q4_K.gguf](https://huggingface.co/legraphista/glm-4-9b-chat-GGUF/blob/main/glm-4-9b-chat.Q4_K.gguf) | Q4_K | 6.25GB | ✅ Available | ⚪ Static | 📦 No
| glm-4-9b-chat.Q4_K_S | Q4_K_S | - | ⏳ Processing | ⚪ Static | -
| glm-4-9b-chat.IQ4_NL | IQ4_NL | - | ⏳ Processing | ⚪ Static | -
| glm-4-9b-chat.IQ4_XS | IQ4_XS | - | ⏳ Processing | ⚪ Static | -
| glm-4-9b-chat.Q3_K | Q3_K | - | ⏳ Processing | ⚪ Static | -
| glm-4-9b-chat.Q3_K_L | Q3_K_L | - | ⏳ Processing | ⚪ Static | -
| glm-4-9b-chat.Q3_K_S | Q3_K_S | - | ⏳ Processing | ⚪ Static | -
| glm-4-9b-chat.IQ3_M | IQ3_M | - | ⏳ Processing | ⚪ Static | -
| glm-4-9b-chat.IQ3_S | IQ3_S | - | ⏳ Processing | ⚪ Static | -
| glm-4-9b-chat.IQ3_XS | IQ3_XS | - | ⏳ Processing | ⚪ Static | -
| glm-4-9b-chat.Q2_K | Q2_K | - | ⏳ Processing | ⚪ Static | -
## Downloading using huggingface-cli
If you do not have `huggingface-cli` installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:
```
huggingface-cli download legraphista/glm-4-9b-chat-GGUF --include "glm-4-9b-chat.Q8_0.gguf" --local-dir ./
```
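For example, to grab one of the quants already marked as ✅ Available in the table above (Q6_K shown here, ~8.26GB; swap in whichever quant you prefer):
```
huggingface-cli download legraphista/glm-4-9b-chat-GGUF --include "glm-4-9b-chat.Q6_K.gguf" --local-dir ./
```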
If the model file is large, it has been split into multiple files. To download them all to a local folder, run:
```
huggingface-cli download legraphista/glm-4-9b-chat-GGUF --include "glm-4-9b-chat.Q8_0/*" --local-dir ./
# see the FAQ below for merging split GGUFs
```
---
## Inference
### Simple chat template
```
[gMASK]<sop><|user|>
{user_prompt}<|assistant|>
{assistant_response}<|user|>
{next_user_prompt}
```
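For example, a hypothetical single-turn prompt would look like this (the model generates its reply after the final `<|assistant|>` tag):
```
[gMASK]<sop><|user|>
What is the capital of France?<|assistant|>
```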
### Chat template with system prompt
```
[gMASK]<sop><|system|>
{system_prompt}<|user|>
{user_prompt}<|assistant|>
{assistant_response}<|user|>
{next_user_prompt}
```
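Filled in with placeholder text, the first turn of a conversation with a system prompt would look like:
```
[gMASK]<sop><|system|>
You are a helpful assistant.<|user|>
Summarize the following paragraph in one sentence.<|assistant|>
```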
### Llama.cpp
```
llama.cpp/main -m glm-4-9b-chat.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
```
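For a non-interactive, single-shot run you can bake the chat template straight into the prompt. The extra flags below (`-c` context size, `-n` max new tokens, `--temp` sampling temperature) are standard llama.cpp options; the values are placeholders to adjust to taste:
```
llama.cpp/main -m glm-4-9b-chat.Q8_0.gguf --color \
  -c 4096 -n 512 --temp 0.7 \
  -p "[gMASK]<sop><|user|>
Write a haiku about model quantization.<|assistant|>"
```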
---
## FAQ
### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), only the lower-bit quantizations appear to benefit from the imatrix input (based on HellaSwag results).
### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
- To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
- Download the appropriate zip for your system from the latest release
- Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `glm-4-9b-chat.Q8_0`)
3. Run `gguf-split --merge glm-4-9b-chat.Q8_0/glm-4-9b-chat.Q8_0-00001-of-XXXXX.gguf glm-4-9b-chat.Q8_0.gguf`
- Make sure to point `gguf-split` to the first chunk of the split (a full command example follows below).
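Put together, assuming the chunks sit in a local `glm-4-9b-chat.Q8_0/` folder and the `gguf-split` binary is in the current directory, the merge would look like:
```
./gguf-split --merge glm-4-9b-chat.Q8_0/glm-4-9b-chat.Q8_0-00001-of-XXXXX.gguf glm-4-9b-chat.Q8_0.gguf
```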
---
Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!