Suparious committed on
Commit 6c3487f
1 Parent(s): abca7c4

add a model card

Files changed (1)
  1. README.md +42 -0
README.md CHANGED
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- macadeliccc/WestLake-7B-v2-laser-truthy-dpo
- FelixChao/WestSeverus-7B-DPO-v2
- FelixChao/Faraday-7B
base_model:
- macadeliccc/WestLake-7B-v2-laser-truthy-dpo
- FelixChao/WestSeverus-7B-DPO-v2
- FelixChao/Faraday-7B
pipeline_tag: text-generation
model_type: mistral
model_name: Darcy-7b
model_creator: gmonsoon
quantized_by: Suparious
---
# Darcy-7b - AWQ

- Model creator: [gmonsoon](https://huggingface.co/gmonsoon)
- Original model: [Darcy-7b](https://huggingface.co/gmonsoon/Darcy-7b)

## Model description

Darcy-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):

- [macadeliccc/WestLake-7B-v2-laser-truthy-dpo](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo)
- [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2)
- [FelixChao/Faraday-7B](https://huggingface.co/FelixChao/Faraday-7B)

### About AWQ

AWQ is an efficient, accurate, and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference than GPTQ, with quality equivalent to or better than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.
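As a rough illustration of what producing a 4-bit AWQ checkpoint involves, here is a minimal sketch using the AutoAWQ Python API. The quantization settings and output path shown are common defaults and placeholders (assumptions), not necessarily the exact settings used to produce this repository.

```python
# Minimal sketch: quantizing the original FP16 model to 4-bit AWQ with AutoAWQ.
# The quant_config values and output path are assumed defaults, not the exact
# settings used for this repository.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "gmonsoon/Darcy-7b"   # original FP16 model
quant_path = "Darcy-7b-AWQ"        # local output directory (placeholder)
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Load the unquantized model and its tokenizer
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Run AWQ calibration and quantization, then save the 4-bit checkpoint
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```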
It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, which supports all model types
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) - version 4.35.0 or later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code (see the sketch below)
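
As a starting point for use from Python, here is a minimal sketch that loads an AWQ checkpoint with AutoAWQ and runs generation. The repository ID below is a placeholder (assumption); substitute the actual AWQ repository for this model.

```python
# Minimal sketch: running inference on the AWQ checkpoint with AutoAWQ.
# "Suparious/Darcy-7b-AWQ" is a placeholder repo ID (assumption); replace it
# with the actual quantized repository.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

quant_path = "Suparious/Darcy-7b-AWQ"  # placeholder (assumption)

tokenizer = AutoTokenizer.from_pretrained(quant_path)
model = AutoAWQForCausalLM.from_quantized(quant_path, fuse_layers=True)

prompt = "Tell me about AI."
tokens = tokenizer(prompt, return_tensors="pt").input_ids.cuda()

# Generate up to 128 new tokens with light sampling
output = model.generate(tokens, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```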