---
base_model: IlyaGusev/saiga2_70b_lora
inference: false
license: llama2
model_creator: IlyaGusev
model_name: Saiga2 70B AWQ
model_type: llama
quantized_by: Komposter43
language:
- ru
---

# Saiga2 70B - AWQ, Russian LLaMA2-based chatbot

- Model creator: [IlyaGusev](https://huggingface.co/IlyaGusev)
- Original model: [Saiga2 70B](https://huggingface.co/IlyaGusev/saiga2_70b_lora)

<!-- description start -->
## Description

This repo contains AWQ model files for IlyaGusev's [Saiga2 70B](https://huggingface.co/IlyaGusev/saiga2_70b_lora).

### About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality versus the most commonly used GPTQ settings.
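
As a minimal loading sketch, assuming the [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) package is installed (`pip install autoawq`) and enough VRAM for a 70B 4-bit checkpoint (roughly 40GB); the repo id below is a placeholder, so substitute the actual id of this repo:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# Placeholder id -- replace with the actual id of this repo
model_name_or_path = "Komposter43/saiga2_70b-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoAWQForCausalLM.from_quantized(
    model_name_or_path,
    fuse_layers=True,   # fuse attention/MLP modules for faster inference
    safetensors=True,
)

# Saiga2 expects its own chat template (see the original model card);
# a bare prompt is used here purely for illustration.
prompt = "Привет! Расскажи о себе."
tokens = tokenizer(prompt, return_tensors="pt").input_ids.cuda()

output = model.generate(
    tokens,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    max_new_tokens=256,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```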

It is also supported by the continuous batching server [vLLM](https://github.com/vllm-project/vllm), allowing AWQ models to be used for high-throughput concurrent inference in multi-user server scenarios. Note that, at the time of writing, overall throughput is still lower than running vLLM with unquantised models; however, AWQ makes it possible to use much smaller GPUs, which can simplify deployment and reduce overall cost. For example, a 70B model can be run on 1 x 48GB GPU instead of 2 x 80GB.
<!-- description end -->
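
## Serving with vLLM

A rough sketch of offline inference with vLLM; the repo id is again a placeholder, and AWQ support requires vLLM v0.2.0 or later, so details may vary across versions:

```python
from vllm import LLM, SamplingParams

# Placeholder id -- replace with the actual id of this repo
llm = LLM(model="Komposter43/saiga2_70b-AWQ", quantization="awq")

sampling_params = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=256)
outputs = llm.generate(["Привет! Расскажи о себе."], sampling_params)

for output in outputs:
    print(output.outputs[0].text)
```

The same files can be served over HTTP with `python -m vllm.entrypoints.api_server --model <repo-id> --quantization awq`.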