---
base_model: senseable/WestLake-7B-v2
license: apache-2.0
language:
- en
library_name: transformers
model_creator: Common Sense
model_name: WestLake 7B v2
model_type: mistral
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: Suparious
---

# WestLake 7B v2 - AWQ

- Model creator: [Common Sense](https://huggingface.co/senseable)
- Original model: [WestLake 7B v2](https://huggingface.co/senseable/WestLake-7B-v2)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6585ffb10eeafbd678d4b3fe/jnqnl8a_zYYMqJoBpX8yS.png)

## Model description

This repo contains AWQ model files for [Common Sense's WestLake 7B v2](https://huggingface.co/senseable/WestLake-7B-v2).

These files were quantised using hardware kindly provided by [SolidRusT Networks](https://solidrust.net/).

### About AWQ

AWQ is an efficient, accurate and fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference than GPTQ, with quality equivalent to or better than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows with NVIDIA GPUs only. macOS users: please use GGUF models instead.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, which supports all model types
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) - version 4.35.0 or later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code

## Prompt template: ChatML

```plaintext
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
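
## Example: inference with Transformers

A minimal sketch of loading these AWQ files with Transformers 4.35.0 or later on an NVIDIA GPU and building the ChatML prompt shown above. The repository id below is a placeholder for this AWQ repo, and the example assumes the tokenizer ships a ChatML chat template; if it does not, format the prompt string manually using the template above.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder id: replace with the actual path of this AWQ repository.
model_id = "solidrust/WestLake-7B-v2-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",       # requires a CUDA-capable NVIDIA GPU
    low_cpu_mem_usage=True,
)

# Build the ChatML prompt via the tokenizer's chat template (assumed present).
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a short poem about lakes."},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The same files can also be served with vLLM (0.2.2 or later) or TGI as listed above; the Transformers route shown here is simply the most direct way to verify the quantized weights from Python.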