Tags: Text Generation, Transformers, Safetensors, PyTorch, mistral, finetuned, quantized, 4-bit precision, AWQ, instruct, conversational, Inference Endpoints, text-generation-inference, finetune, chatml, DPO, RLHF, gpt4, synthetic data, distillation
bagel-7b-v0.5-AWQ / quant_config.json (commit 2d76893 by Shaun Prince: "adding quant config")
{
"zero_point": true,
"q_group_size": 128,
"w_bit": 4,
"version": "GEMM"
}
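The config above describes AWQ 4-bit weight quantization: `w_bit: 4` gives 16 quantization levels, `q_group_size: 128` means each group of 128 weights shares one scale (and zero point), `zero_point: true` selects asymmetric quantization, and `version: "GEMM"` picks the packed-GEMM kernel layout. As a rough illustration of what these fields imply (this is a NumPy sketch of group-wise asymmetric quantization, not AutoAWQ's actual kernel or packing code):

```python
import numpy as np

def quantize_group(w, n_bits=4):
    """Asymmetric quantization of one weight group: shared scale + zero point."""
    qmax = 2**n_bits - 1                         # 15 levels above zero for w_bit=4
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / qmax if w_max > w_min else 1.0
    zero = np.round(-w_min / scale)              # zero_point=true -> offset per group
    q = np.clip(np.round(w / scale) + zero, 0, qmax).astype(np.uint8)
    return q, scale, zero

def dequantize_group(q, scale, zero):
    """Recover approximate float weights from the packed integers."""
    return (q.astype(np.float32) - zero) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=128).astype(np.float32)      # one q_group_size=128 group
q, scale, zero = quantize_group(w)               # q fits in 4 bits: values 0..15
w_hat = dequantize_group(q, scale, zero)
print("max abs error:", np.abs(w - w_hat).max())
```

Each group stores 128 4-bit integers plus one scale and one zero point, which is where the memory savings over fp16 come from.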