l3utterfly/Qwen1.5-1.8B-layla-v4-gguf
Tags: GGUF · Inference Endpoints · conversational
License: apache-2.0
Branch: main · 1 contributor · History: 3 commits
Latest commit 86b7117 by l3utterfly ("Add quantised GGUF files"), 6 months ago
File                                  Size      Last commit                Updated
.gitattributes                        1.56 kB   cradle                     11 months ago
Qwen1.5-1.8B-layla-v4-Q2_K.gguf       847 MB    Add quantised GGUF files   6 months ago
Qwen1.5-1.8B-layla-v4-Q4_0_4_4.gguf   1.12 GB   Add quantised GGUF files   6 months ago
Qwen1.5-1.8B-layla-v4-Q4_0_4_8.gguf   1.12 GB   Add quantised GGUF files   6 months ago
Qwen1.5-1.8B-layla-v4-Q4_0_8_8.gguf   1.12 GB   Add quantised GGUF files   6 months ago
Qwen1.5-1.8B-layla-v4-Q4_K.gguf       1.22 GB   cradle                     11 months ago
Qwen1.5-1.8B-layla-v4-Q4_K_M.gguf     1.22 GB   Add quantised GGUF files   6 months ago
Qwen1.5-1.8B-layla-v4-Q5_K.gguf       1.38 GB   cradle                     11 months ago
Qwen1.5-1.8B-layla-v4-Q6_K.gguf       1.58 GB   Add quantised GGUF files   6 months ago
Qwen1.5-1.8B-layla-v4-Q8_0.gguf       1.96 GB   Add quantised GGUF files   6 months ago
Qwen1.5-1.8B-layla-v4-f16.gguf        3.68 GB   Add quantised GGUF files   6 months ago
README.md                             28 Bytes  initial commit             11 months ago

All .gguf files are stored via Git LFS.
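
The files above are GGUF quantisations of the same model at different size/precision trade-offs. As a minimal sketch of how one of the listed files could be fetched and run locally (assuming the huggingface_hub and llama-cpp-python packages; the prompt and generation settings below are illustrative, not part of this repository):

```python
# Sketch: download one of the quantised GGUF files listed above and load it
# with llama-cpp-python. The repo id and filename come from the file listing;
# everything else (context size, prompt, token limit) is an assumption.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the Q4_K_M quantisation (about 1.22 GB) from this repository.
model_path = hf_hub_download(
    repo_id="l3utterfly/Qwen1.5-1.8B-layla-v4-gguf",
    filename="Qwen1.5-1.8B-layla-v4-Q4_K_M.gguf",
)

# Load the GGUF file; n_ctx sets the context window used for generation.
llm = Llama(model_path=model_path, n_ctx=2048)

# Simple one-off completion; the model card (README.md) is the authoritative
# source for the expected chat template.
output = llm("Hello, how are you?", max_tokens=64)
print(output["choices"][0]["text"])
```

A smaller file such as Qwen1.5-1.8B-layla-v4-Q2_K.gguf trades quality for memory, while the f16 file is the unquantised reference; swapping the filename argument is enough to switch between them.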