AI Model Name: Llama 3 8B ("Built with Meta Llama 3"; license: https://llama.meta.com/llama3/license/)

This model is the result of running AutoAWQ to quantize Llama 3 8B down to ~4 bits per parameter.
To launch an OpenAI-compatible API endpoint on your Linux server:
```shell
# Fetch the quantized weights
git lfs install
git clone https://huggingface.co/catid/cat-llama-3-8b-awq-q128-w4-gemm

# Set up a fresh environment with vLLM pinned to commit a134ef6
conda create -n vllm8 python=3.10 -y && conda activate vllm8
pip install -U git+https://github.com/vllm-project/vllm.git@a134ef6

# Serve an OpenAI-compatible API endpoint
python -m vllm.entrypoints.openai.api_server --model cat-llama-3-8b-awq-q128-w4-gemm
```
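Once the server is running, any OpenAI-style client can query it. A minimal sketch using only the Python standard library; the `/v1/completions` path and port 8000 are vLLM's defaults, and the prompt is arbitrary:

```python
import json
import urllib.request

# vLLM serves an OpenAI-compatible API on port 8000 by default.
API_URL = "http://localhost:8000/v1/completions"


def build_payload(prompt: str, max_tokens: int = 16) -> dict:
    # The model name must match the directory passed to --model above.
    return {
        "model": "cat-llama-3-8b-awq-q128-w4-gemm",
        "prompt": prompt,
        "max_tokens": max_tokens,
    }


def complete(prompt: str) -> str:
    # POST the completion request and return the generated text.
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["text"]


if __name__ == "__main__":
    print(complete("The capital of France is"))
```

For chat-style usage, vLLM also exposes `/v1/chat/completions`, so the official `openai` client library works by pointing its base URL at the server.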