Bitsandbytes 4-bit (NF4) quantization of https://huggingface.co/bigcode/starcoder2-7b.

See https://huggingface.co/blog/4bit-transformers-bitsandbytes for instructions.

from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers import BitsAndBytesConfig
import torch

# NF4 4-bit quantization with nested (double) quantization; bfloat16 for compute.
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Quantize the base model on the fly while loading it.
model = AutoModelForCausalLM.from_pretrained("bigcode/starcoder2-7b", quantization_config=nf4_config)
tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoder2-7b")

# Push the quantized weights and tokenizer to this repository.
model.push_to_hub("onekq-ai/starcoder2-7b-bnb-4bit")
tokenizer.push_to_hub("onekq-ai/starcoder2-7b-bnb-4bit")
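
To run the quantized checkpoint, it can be loaded straight from this repository; the saved quantization config is applied automatically. The sketch below is illustrative only: the prompt and generation settings are arbitrary choices, and device_map="auto" assumes a CUDA GPU is available (bitsandbytes 4-bit kernels require one).

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "onekq-ai/starcoder2-7b-bnb-4bit",
    device_map="auto",  # assumes a CUDA GPU is available
)
tokenizer = AutoTokenizer.from_pretrained("onekq-ai/starcoder2-7b-bnb-4bit")

# Optional sanity check of the in-memory footprint of the 4-bit weights.
print(f"memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB")

# Illustrative completion prompt.
prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))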

Safetensors checkpoint: 3.81B params (tensor types F32, FP16, U8).