---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- macadeliccc/WestLake-7B-v2-laser-truthy-dpo
- FelixChao/WestSeverus-7B-DPO-v2
- FelixChao/Faraday-7B
base_model:
- macadeliccc/WestLake-7B-v2-laser-truthy-dpo
- FelixChao/WestSeverus-7B-DPO-v2
- FelixChao/Faraday-7B
pipeline_tag: text-generation
model_type: mistral
model_name: Darcy-7b
model_creator: gmonsoon
quantized_by: Suparious
---

# Darcy-7b - AWQ
## Model description
Darcy-7b is a merge of the following models using LazyMergekit:

- macadeliccc/WestLake-7B-v2-laser-truthy-dpo
- FelixChao/WestSeverus-7B-DPO-v2
- FelixChao/Faraday-7B
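As a rough illustration of what a LazyMergekit merge looks like, here is a hypothetical config sketch for combining these three models. The actual merge method, base model, and parameters used for Darcy-7b are not stated in this card, so every value below other than the model names is an assumption.

```yaml
# Hypothetical mergekit config sketch; method and parameters are assumptions,
# not the recipe actually used for Darcy-7b.
models:
  - model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo
  - model: FelixChao/WestSeverus-7B-DPO-v2
  - model: FelixChao/Faraday-7B
merge_method: dare_ties        # assumption; could equally be slerp, ties, etc.
base_model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo  # assumption
dtype: bfloat16
```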
## About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- Text Generation Webui - using Loader: AutoAWQ
- vLLM - version 0.2.2 or later supports all model types.
- Hugging Face Text Generation Inference (TGI)
- Transformers version 4.35.0 and later, from any code or client that supports Transformers
- AutoAWQ - for use from Python code
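For example, the quantized model can be loaded directly through Transformers (4.35.0 or later), which uses AutoAWQ under the hood. This is a minimal sketch; the repo id `Suparious/Darcy-7b-AWQ` is an assumption for illustration, so substitute the actual AWQ repository name.

```python
# Minimal sketch: run an AWQ model via Transformers (>= 4.35.0).
# Requires the `transformers`, `autoawq`, and `torch` packages and an NVIDIA GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Suparious/Darcy-7b-AWQ"  # assumed repo id; adjust to the real one

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain AWQ quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```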