Text Generation
Transformers
Safetensors
PyTorch
English
mistral
finetuned
quantized
4-bit precision
AWQ
conversational
Inference Endpoints
text-generation-inference
awq
Suparious committed on
Commit 709d937
1 Parent(s): c60ab08

Update model card

Files changed (1)
  1. README.md +107 -1
README.md CHANGED
@@ -16,10 +16,16 @@ tags:
 - region:us
 license: apache-2.0
 base_model: cognitivecomputations/dolphin-2.6-mistral-7b-dpo
+ language:
+ - en
 datasets:
+ - ehartford/dolphin
+ - jondurbin/airoboros-2.2.1
+ - ehartford/dolphin-coder
+ - teknium/openhermes
+ - ise-uiuc/Magicoder-OSS-Instruct-75K
 - ise-uiuc/Magicoder-Evol-Instruct-110K
 - LDJnr/Capybara
- - teknium/openhermes
 library_name: transformers
 model_creator: cognitivecomputations
 model_name: Dolphin Mistral 7B v2.6 - AWQ
@@ -39,3 +45,103 @@ prompt_template: '<|im_start|>system
 '
 quantized_by: Suparious
 ---
+ # Dolphin Mistral 7B v2.6 DPO laser - AWQ
+
+ - Model creator: [cognitivecomputations](https://huggingface.co/cognitivecomputations)
+ - Original model: [Dolphin 2.6 Mistral 7B DPO laser](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser)
+
+ This model's training was sponsored by [convai](https://www.convai.com/).
+
+ This model is based on Mistral-7B, and the base model has a 16k context window.
+
+ This is a special release of Dolphin-DPO based on the LASER [paper](https://arxiv.org/pdf/2312.13558.pdf) and an implementation by @fernandofernandes, assisted by @ehartford.
+
+ ```bibtex
+ @article{sharma2023truth,
+   title={The Truth is in There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction},
+   author={Sharma, Pratyusha and Ash, Jordan T and Misra, Dipendra},
+   journal={arXiv preprint arXiv:2312.13558},
+   year={2023}
+ }
+ ```
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png)
+
+ ## Model description
+
+ This repo contains AWQ model files for [cognitivecomputations' Dolphin Mistral 7B v2.6 DPO laser](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser).
+
+ These files were quantised using hardware kindly provided by [SolidRusT Networks](https://solidrust.net/).
+
+ ## How to use
+
+ ### Install the necessary packages
+
+ ```bash
+ pip install --upgrade autoawq autoawq-kernels
+ ```
+
+ ### Example Python code
+
+ ```python
+ from awq import AutoAWQForCausalLM
+ from transformers import AutoTokenizer, TextStreamer
+
+ model_path = "solidrust/dolphin-2.6-mistral-7b-dpo-laser-AWQ"
+ system_message = "You are Dolphin, a helpful AI assistant."
+
+ # Load the quantized model and its tokenizer
+ model = AutoAWQForCausalLM.from_quantized(model_path,
+                                           fuse_layers=True)
+ tokenizer = AutoTokenizer.from_pretrained(model_path,
+                                           trust_remote_code=True)
+ streamer = TextStreamer(tokenizer,
+                         skip_prompt=True,
+                         skip_special_tokens=True)
+
+ # Build the ChatML prompt and convert it to tokens
+ prompt_template = """\
+ <|im_start|>system
+ {system_message}<|im_end|>
+ <|im_start|>user
+ {prompt}<|im_end|>
+ <|im_start|>assistant"""
+
+ prompt = "You're standing on the surface of the Earth. " \
+          "You walk one mile south, one mile west and one mile north. " \
+          "You end up exactly where you started. Where are you?"
+
+ tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt),
+                    return_tensors='pt').input_ids.cuda()
+
+ # Generate output, streaming tokens to stdout as they are produced
+ generation_output = model.generate(tokens,
+                                    streamer=streamer,
+                                    max_new_tokens=512)
+ ```
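+
+ A note on `fuse_layers=True`: AutoAWQ fuses attention and MLP modules for architectures it supports, which typically speeds up generation; if fusion causes problems on a given setup, passing `fuse_layers=False` falls back to the unfused modules.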
+
+ ### About AWQ
+
+ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference with quality equivalent to or better than the most commonly used GPTQ settings.
+
+ AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.
+
+ It is supported by:
+
+ - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
+ - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, for support for all model types
+ - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
+ - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers (see the sketch after this list)
+ - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
+
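+ As a concrete illustration of the Transformers route, here is a minimal sketch; it assumes the same repo id as the example above and that the `autoawq` package is installed, since Transformers delegates the AWQ kernels to it:
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_path = "solidrust/dolphin-2.6-mistral-7b-dpo-laser-AWQ"  # assumed repo id
+
+ # Transformers >= 4.35.0 reads the AWQ quantization config from the repo
+ # and loads the 4-bit weights directly, with no AutoAWQ-specific code.
+ model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")
+ tokenizer = AutoTokenizer.from_pretrained(model_path)
+ ```
+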
+ ## Prompt template: ChatML
+
+ ```plaintext
+ <|im_start|>system
+ {system_message}<|im_end|>
+ <|im_start|>user
+ {prompt}<|im_end|>
+ <|im_start|>assistant
+ ```
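+
+ Rather than hand-writing this template, `tokenizer.apply_chat_template` can render it, assuming the tokenizer in this repo ships a ChatML chat template (a sketch, reusing the repo id assumed above):
+
+ ```python
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("solidrust/dolphin-2.6-mistral-7b-dpo-laser-AWQ")
+
+ messages = [
+     {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
+     {"role": "user", "content": "Why is the sky blue?"},
+ ]
+
+ # Renders the ChatML layout shown above; add_generation_prompt appends the
+ # opening assistant tag so generation continues in the assistant turn.
+ text = tokenizer.apply_chat_template(messages,
+                                      tokenize=False,
+                                      add_generation_prompt=True)
+ print(text)
+ ```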