elysiantech/gemma-2b-gptq-4bit
gemma-2b-gptq-4bit is a version of the Gemma 2B base model that was quantized using the GPTQ method developed by Frantar et al. (2022).
Please refer to the Original Gemma Model Card for details about the model preparation and training processes.
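A minimal loading sketch, assuming the checkpoint follows the standard transformers GPTQ layout (so it can be loaded directly with transformers once optimum and auto-gptq are installed); the prompt and generation settings are illustrative:

```python
# Load the 4-bit GPTQ checkpoint and run a short generation.
# Assumes optimum and auto-gptq are installed alongside transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "elysiantech/gemma-2b-gptq-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```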
Dependencies
auto-gptq
– AutoGPTQ was used to quantize the Gemma 2B model (see the quantization sketch below).
vllm==0.4.2
– vLLM was used to host models for benchmarking (see the serving sketch below).
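For reference, a sketch of how a GPTQ quantization run with AutoGPTQ might look; the base checkpoint name, calibration text, group size, and other settings here are assumptions, as the card does not state the exact configuration used:

```python
# Hypothetical AutoGPTQ quantization sketch; all settings below are
# illustrative assumptions, not the card's actual configuration.
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
from transformers import AutoTokenizer

base_model = "google/gemma-2b"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)

# 4-bit weights with a group size of 128 is a common GPTQ configuration.
quantize_config = BaseQuantizeConfig(bits=4, group_size=128, desc_act=False)
model = AutoGPTQForCausalLM.from_pretrained(base_model, quantize_config)

# GPTQ needs a small calibration set; tokenize a few representative texts.
texts = ["Quantization reduces a model's memory footprint."]
examples = [
    {"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]}
    for enc in (tokenizer(t, return_tensors="pt") for t in texts)
]

model.quantize(examples)
model.save_quantized("gemma-2b-gptq-4bit")
```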
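And a serving sketch for hosting the quantized model with vLLM, as the card describes for benchmarking; the prompt and sampling parameters are illustrative:

```python
# Host the GPTQ checkpoint with vLLM; sampling settings are illustrative.
from vllm import LLM, SamplingParams

llm = LLM(model="elysiantech/gemma-2b-gptq-4bit", quantization="gptq")
sampling = SamplingParams(temperature=0.0, max_tokens=64)

outputs = llm.generate(["The capital of France is"], sampling)
print(outputs[0].outputs[0].text)
```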