---
base_model: https://huggingface.co/mrm8488/llama-2-coder-7b
datasets:
- HuggingFaceH4/CodeAlpaca_20K
inference: false
language:
- code
license: apache-2.0
model-index:
- name: Llama 2 Coder 7B
  results: []
model_creator: mrm8488
model_name: Llama 2 Coder 7B
model_type: llama
pipeline_tag: text-generation
prompt_template: 'You are a coding assistant that will help the user to resolve the
  following instruction:

  ### Instruction: {prompt}


  ### Solution:

  '
quantized_by: TheBloke
tags:
- generated_from_trainer
- code
- coding
- llama
thumbnail: https://huggingface.co/mrm8488/llama-2-coder-7b/resolve/main/llama2-coder-logo-removebg-preview.png
---

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Llama 2 Coder 7B - AWQ
- Model creator: [mrm8488](https://huggingface.co/mrm8488)
- Original model: [Llama 2 Coder 7B](https://huggingface.co/mrm8488/llama-2-coder-7b)

<!-- description start -->
## Description

This repo contains AWQ model files for [mrm8488's Llama 2 Coder 7B](https://huggingface.co/mrm8488/llama-2-coder-7b).

### About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference.

It is also supported by the continuous-batching server [vLLM](https://github.com/vllm-project/vllm), allowing AWQ models to be used for high-throughput concurrent inference in multi-user server scenarios. Note that, at the time of writing, overall throughput is still lower than running vLLM with unquantised models; however, AWQ enables the use of much smaller GPUs, which can lead to easier deployment and overall cost savings. For example, a 70B model can be run on 1 x 48GB GPU instead of 2 x 80GB.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Llama-2-Coder-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-Coder-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-Coder-7B-GGUF)
* [mrm8488's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/mrm8488/llama-2-coder-7b)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: CodingAssistant

```
You are a coding assistant that will help the user to resolve the following instruction:
### Instruction: {prompt}

### Solution:

```
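
For example, the template can be filled in before the text is handed to any of the inference back-ends below. This is a minimal sketch; the `format_prompt` helper name is illustrative, not part of any library:

```python
# Minimal sketch of filling in the CodingAssistant template.
# The helper name and the example instruction are illustrative only.
PROMPT_TEMPLATE = """You are a coding assistant that will help the user to resolve the following instruction:
### Instruction: {prompt}

### Solution:
"""

def format_prompt(instruction: str) -> str:
    """Substitute the user's instruction into the template."""
    return PROMPT_TEMPLATE.format(prompt=instruction)

print(format_prompt("Write a Python function that reverses a string."))
```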

<!-- prompt-template end -->
<!-- licensing start -->
## Licensing

The creator of the source model has listed its license as `apache-2.0`, and this quantization has therefore used that same license.

As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.

In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [mrm8488's Llama 2 Coder 7B](https://huggingface.co/mrm8488/llama-2-coder-7b).
<!-- licensing end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files and AWQ parameters

For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM.

Models are released as sharded safetensors files.

| Branch | Bits | GS (group size) | AWQ Dataset | Seq Len | Size |
| ------ | ---- | --------------- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Llama-2-Coder-7B-AWQ/tree/main) | 4 | 128 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | 3.89 GB |
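
For reference, these parameters map onto an AutoAWQ quantisation config along the following lines. This is a hedged sketch of how such files could be produced, not the exact command used for this repo; the `zero_point` and `version` values are assumed typical AutoAWQ defaults:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# Sketch: how the table's Bits / GS values map onto an AutoAWQ quant config.
# zero_point and version are assumed defaults, not confirmed for this repo.
model_path = "mrm8488/llama-2-coder-7b"
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Quantize and save the sharded safetensors files.
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized("llama-2-coder-7b-awq")
tokenizer.save_pretrained("llama-2-coder-7b-awq")
```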

<!-- README_AWQ.md-provided-files end -->

<!-- README_AWQ.md-use-from-vllm start -->
## Serving this model from vLLM

Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).

When using vLLM as a server, pass the `--quantization awq` parameter, for example:

```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Llama-2-Coder-7B-AWQ --quantization awq
```
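
Once the server is up, it can be queried over HTTP. The sketch below assumes the default port (8000) and the `/generate` endpoint exposed by `vllm.entrypoints.api_server` at the time of writing; check the vLLM docs if the API has changed:

```python
import requests

# Query the vLLM API server started above. The /generate endpoint and the
# response shape follow vllm.entrypoints.api_server at the time of writing;
# the prompt uses this model's CodingAssistant template.
prompt = (
    "You are a coding assistant that will help the user to resolve the following instruction:\n"
    "### Instruction: Write a Python function that reverses a string.\n\n"
    "### Solution:\n"
)
response = requests.post(
    "http://localhost:8000/generate",
    json={"prompt": prompt, "max_tokens": 256, "temperature": 0.1},
)
print(response.json()["text"])
```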

When using vLLM from Python code, pass the `quantization="awq"` parameter, for example:

```python
from vllm import LLM, SamplingParams

prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="TheBloke/Llama-2-Coder-7B-AWQ", quantization="awq")

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->

<!-- README_AWQ.md-use-from-python start -->
## How to use this AWQ model from Python code

### Install the necessary packages

Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.0.2 or later

```shell
pip3 install autoawq
```

If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:

```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```

### You can then try the following example code

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_name_or_path = "TheBloke/Llama-2-Coder-7B-AWQ"

# Load the quantized model and its tokenizer
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
                                          trust_remote_code=False, safetensors=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False)

prompt = "Tell me about AI"
prompt_template = f'''You are a coding assistant that will help the user to resolve the following instruction:
### Instruction: {prompt}

### Solution:

'''

print("\n\n*** Generate:")

# Tokenize the filled-in template and move it to the GPU
tokens = tokenizer(
    prompt_template,
    return_tensors='pt'
).input_ids.cuda()

# Generate output
generation_output = model.generate(
    tokens,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    max_new_tokens=512
)

print("Output: ", tokenizer.decode(generation_output[0]))

# Inference can also be done using transformers' pipeline
from transformers import pipeline

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_AWQ.md-use-from-python end -->

<!-- README_AWQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) and [vLLM](https://github.com/vllm-project/vllm).

[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is not yet compatible with AWQ, but a PR is open which should bring support soon: [TGI PR #781](https://github.com/huggingface/text-generation-inference/issues/781).
<!-- README_AWQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov


Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: mrm8488's Llama 2 Coder 7B


<div style="text-align:center;width:250px;height:250px;">
    <img src="https://huggingface.co/mrm8488/llama-2-coder-7b/resolve/main/llama2-coder-logo-removebg-preview.png" alt="llama-2 coder logo">
</div>


# Llama 2 Coder 🦙👩‍💻
**Llama-2 7B** fine-tuned on the **CodeAlpaca 20k instruction dataset** using **QLoRA** with the [PEFT](https://github.com/huggingface/peft) library.
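
The card does not list the LoRA configuration used, so the following is a hypothetical reconstruction of a typical QLoRA setup with PEFT; every value here (rank, alpha, dropout, target modules) is an assumption, not the recipe used for this model:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Hypothetical QLoRA setup: load the base model in 4-bit NF4, then attach
# LoRA adapters. Ranks and target modules are illustrative assumptions.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trained
```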

## Model description 🧠

[Llama-2](https://huggingface.co/meta-llama/Llama-2-7b)

Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. The fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks Meta tested, and in their human evaluations for helpfulness and safety are on par with some popular closed-source models like ChatGPT and PaLM.


## Training and evaluation data 📚

[CodeAlpaca_20K](https://huggingface.co/datasets/HuggingFaceH4/CodeAlpaca_20K): contains 20K instruction-following examples used for fine-tuning the Code Alpaca model.
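
The dataset can be inspected directly with the `datasets` library; a quick sketch, assuming the standard `train` split:

```python
from datasets import load_dataset

# Load and inspect the fine-tuning data.
ds = load_dataset("HuggingFaceH4/CodeAlpaca_20K", split="train")
print(ds)     # number of rows and column names
print(ds[0])  # one instruction-following example
```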


### Training hyperparameters ⚙

```py
optim="paged_adamw_32bit",
num_train_epochs=2,
eval_steps=50,
save_steps=50,
evaluation_strategy="steps",
save_strategy="steps",
save_total_limit=2,
seed=66,
load_best_model_at_end=True,
logging_steps=1,
learning_rate=2e-4,
fp16=True,
bf16=False,
max_grad_norm=0.3,
warmup_ratio=0.03,
group_by_length=True,
lr_scheduler_type="constant"
```
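
These keyword arguments correspond to `transformers.TrainingArguments`; a sketch of how they would be assembled is below. The `output_dir` and batch size are placeholders, as the card does not specify them:

```python
from transformers import TrainingArguments

# Sketch: the hyperparameters above as a TrainingArguments object.
# output_dir and per_device_train_batch_size are assumed placeholders.
training_args = TrainingArguments(
    output_dir="./llama-2-coder-7b-output",
    per_device_train_batch_size=4,
    optim="paged_adamw_32bit",
    num_train_epochs=2,
    eval_steps=50,
    save_steps=50,
    evaluation_strategy="steps",
    save_strategy="steps",
    save_total_limit=2,
    seed=66,
    load_best_model_at_end=True,
    logging_steps=1,
    learning_rate=2e-4,
    fp16=True,
    bf16=False,
    max_grad_norm=0.3,
    warmup_ratio=0.03,
    group_by_length=True,
    lr_scheduler_type="constant",
)
```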

### Training results 🗒️

| Step | Training Loss | Validation Loss |
|------|---------------|-----------------|
| 50   | 0.624400      | 0.600070        |
| 100  | 0.634100      | 0.592757        |
| 150  | 0.545800      | 0.586652        |
| 200  | 0.572500      | 0.577525        |
| 250  | 0.528000      | 0.590118        |


### Eval results 📊

WIP


### Example of usage 👩‍💻

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

model_id = "mrm8488/llama-2-coder-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)

model = AutoModelForCausalLM.from_pretrained(model_id).to("cuda")

def create_prompt(instruction):
    system = "You are a coding assistant that will help the user to resolve the following instruction:"
    instruction = "### Instruction: " + instruction
    return system + "\n" + instruction + "\n\n" + "### Solution:" + "\n"

def generate(
        instruction,
        max_new_tokens=128,
        temperature=0.1,
        top_p=0.75,
        top_k=40,
        num_beams=4,
        **kwargs,
):
    prompt = create_prompt(instruction)
    print(prompt)
    inputs = tokenizer(prompt, return_tensors="pt")
    input_ids = inputs["input_ids"].to("cuda")
    attention_mask = inputs["attention_mask"].to("cuda")
    generation_config = GenerationConfig(
        temperature=temperature,
        top_p=top_p,
        top_k=top_k,
        num_beams=num_beams,
        **kwargs,
    )
    with torch.no_grad():
        generation_output = model.generate(
            input_ids=input_ids,
            attention_mask=attention_mask,
            generation_config=generation_config,
            return_dict_in_generate=True,
            output_scores=True,
            max_new_tokens=max_new_tokens,
            early_stopping=True,
        )
    s = generation_output.sequences[0]
    output = tokenizer.decode(s)
    return output.split("### Solution:")[1].lstrip("\n")

instruction = """
Edit the following XML code to add a navigation bar to the top of a web page
<html>
<head>
  <title>CliBrAIn</title>
</head>
"""
print(generate(instruction))
```

### Citation

```
@misc{manuel_romero_2023,
    author    = {{Manuel Romero}},
    title     = {llama-2-coder-7b (Revision d30d193)},
    year      = 2023,
    url       = {https://huggingface.co/mrm8488/llama-2-coder-7b},
    doi       = {10.57967/hf/0931},
    publisher = {Hugging Face}
}
```