4-bit GPTQ-quantized version of DeepSeek-R1-Distill-Qwen-14B, packaged in mlc-llm format for on-device inference with the Private LLM app.
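This card does not document the exact quantization recipe. As a rough illustration only, here is a minimal sketch of how a comparable 4-bit GPTQ checkpoint could be produced with the Hugging Face transformers/optimum/auto-gptq stack. The base model ID, calibration dataset, and group size below are assumptions, not the settings used for this repo.

```python
# Minimal sketch, not the recipe used for this repo.
# Requires: pip install transformers optimum auto-gptq accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

base_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-14B"  # assumed base checkpoint

tokenizer = AutoTokenizer.from_pretrained(base_id)

# 4-bit GPTQ with a built-in calibration dataset; group_size=128 is a
# common default, not necessarily what this repo used.
gptq_config = GPTQConfig(bits=4, group_size=128, dataset="c4", tokenizer=tokenizer)

# Quantization runs during loading; this needs a GPU and enough memory
# to calibrate a 14B-parameter model.
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    device_map="auto",
    quantization_config=gptq_config,
)

model.save_pretrained("DeepSeek-R1-Distill-Qwen-14B-GPTQ-Int4")
tokenizer.save_pretrained("DeepSeek-R1-Distill-Qwen-14B-GPTQ-Int4")
```

Since Private LLM consumes mlc-llm artifacts, weights quantized this way would still need to be compiled with MLC LLM before they could run in the app; that step is omitted here.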
