Llamacpp Quantizations of speechless-starcoder2-7b

Using llama.cpp release b2354 for quantization.

Original model: https://huggingface.co/uukuguy/speechless-starcoder2-7b

Download a single file (not the whole branch) from the list below:

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| speechless-starcoder2-7b-Q8_0.gguf | Q8_0 | 7.62GB | Extremely high quality, generally unneeded but max available quant. |
| speechless-starcoder2-7b-Q6_K.gguf | Q6_K | 5.89GB | Very high quality, near perfect, recommended. |
| speechless-starcoder2-7b-Q5_K_M.gguf | Q5_K_M | 5.12GB | High quality, very usable. |
| speechless-starcoder2-7b-Q5_K_S.gguf | Q5_K_S | 4.93GB | High quality, very usable. |
| speechless-starcoder2-7b-Q5_0.gguf | Q5_0 | 4.93GB | High quality, older format, generally not recommended. |
| speechless-starcoder2-7b-Q4_K_M.gguf | Q4_K_M | 4.40GB | Good quality, similar to 4.25 bpw. |
| speechless-starcoder2-7b-Q4_K_S.gguf | Q4_K_S | 4.12GB | Slightly lower quality with small space savings. |
| speechless-starcoder2-7b-Q4_0.gguf | Q4_0 | 4.04GB | Decent quality, older format, generally not recommended. |
| speechless-starcoder2-7b-Q3_K_L.gguf | Q3_K_L | 3.98GB | Lower quality but usable, good for low RAM availability. |
| speechless-starcoder2-7b-Q3_K_M.gguf | Q3_K_M | 3.59GB | Even lower quality. |
| speechless-starcoder2-7b-Q3_K_S.gguf | Q3_K_S | 3.09GB | Low quality, not recommended. |
| speechless-starcoder2-7b-Q2_K.gguf | Q2_K | 2.72GB | Extremely low quality, not recommended. |
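As a convenience, here is a minimal sketch of how you might build the direct download URL for a single quant file from this repo, and pick the highest-quality quant that fits a disk/RAM budget. The repo id and filename pattern are taken from this card; the file sizes come from the table above; the helper names (`download_url`, `largest_quant_fitting`) are illustrative, not part of any library.

```python
REPO_ID = "bartowski/speechless-starcoder2-7b-GGUF"
BASE = "speechless-starcoder2-7b"

# File sizes in GB, copied from the table above.
QUANTS = {
    "Q8_0": 7.62, "Q6_K": 5.89, "Q5_K_M": 5.12, "Q5_K_S": 4.93,
    "Q5_0": 4.93, "Q4_K_M": 4.40, "Q4_K_S": 4.12, "Q4_0": 4.04,
    "Q3_K_L": 3.98, "Q3_K_M": 3.59, "Q3_K_S": 3.09, "Q2_K": 2.72,
}

def download_url(quant: str) -> str:
    """Direct 'resolve' URL for one .gguf file, avoiding a full-branch clone."""
    if quant not in QUANTS:
        raise ValueError(f"unknown quant type: {quant}")
    return f"https://huggingface.co/{REPO_ID}/resolve/main/{BASE}-{quant}.gguf"

def largest_quant_fitting(budget_gb: float):
    """Return the largest (highest-quality) quant whose file fits the budget,
    or None if even Q2_K is too big."""
    fitting = [q for q, size in QUANTS.items() if size <= budget_gb]
    return max(fitting, key=QUANTS.get, default=None)
```

For example, with roughly 5 GB to spare, `largest_quant_fitting(5.0)` selects `Q5_K_S`, and `download_url("Q5_K_S")` gives a URL you can pass to `wget` or `curl -L -O`.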

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski

Model size: 7.17B params
Architecture: starcoder2
