alexwww94/glm-4v-9b-gptq-4bit
Tags: Safetensors · Chinese · English · chatglm · glm-4v · quantization · auto-gptq · 4bit · custom_code · 4-bit precision · gptq
License: other
Files and versions
glm-4v-9b-gptq-4bit · 1 contributor · History: 4 commits
Latest commit 5d92e86 (verified) by alexwww94, 2 months ago: Rename gptq_model-4bit-128g.safetensors to model.safetensors
.gitattributes             1.52 kB          initial commit                                                  2 months ago
config.json                1.77 kB          Upload 9 files                                                  2 months ago
configuration.json         36 Bytes         Upload 9 files                                                  2 months ago
configuration_chatglm.py   2.57 kB          Upload 9 files                                                  2 months ago
generation_config.json     205 Bytes        Upload 9 files                                                  2 months ago
model.safetensors          9.14 GB (LFS)    Rename gptq_model-4bit-128g.safetensors to model.safetensors   2 months ago
modeling_chatglm.py        57 kB            Upload 9 files                                                  2 months ago
quantize_config.json       268 Bytes        Upload folder using huggingface_hub                             2 months ago
tokenization_chatglm.py    17.5 kB          Upload 9 files                                                  2 months ago
tokenizer.model            2.62 MB (LFS)    Upload 9 files                                                  2 months ago
tokenizer_config.json      3.22 kB          Upload 9 files                                                  2 months ago
visual.py                  7 kB             Upload 9 files                                                  2 months ago