TheBloke committed on
Commit
e313119
1 Parent(s): 7ed4d4e

Initial merged FP16 model commit

Files changed (1)
  1. README.md +213 -0
README.md ADDED
@@ -0,0 +1,213 @@
---
inference: false
license: other
---

<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->

# Minlik's Chinese Alpaca 33B Merged fp16

These files are the merged, unquantised fp16 model files for [Minlik's Chinese Alpaca 33B Merged](https://huggingface.co/minlik/chinese-alpaca-33b-merged) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test).

[Kaio Ken's SuperHOT 30b LoRA](https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test) is merged onto the base model, and 8K context can then be achieved during inference by using `trust_remote_code=True`.

Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try a smaller sequence length.

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Chinese-Alpaca-33B-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Chinese-Alpaca-33B-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Chinese-Alpaca-33B-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/minlik/chinese-alpaca-33b-merged)

## How to use this model from Python code

First make sure you have Einops installed:

```
pip3 install einops
```

Then run the following code. `config.json` has been set to a default sequence length of 8192, but you can also configure this in your Python code.

The provided modelling code, activated with `trust_remote_code=True`, will automatically set the `scale` parameter from the configured `max_position_embeddings`. E.g. for 8192, `scale` is set to `4`.

```python
from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline

model_name_or_path = "TheBloke/Chinese-Alpaca-33B-SuperHOT-8K-fp16"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True)
# Change this to the sequence length you want
config.max_position_embeddings = 8192

model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
                                             config=config,
                                             trust_remote_code=True,
                                             device_map='auto')

# Note: check that this prompt template is correct for this model!
prompt = "Tell me about AI"
prompt_template=f'''USER: {prompt}
ASSISTANT:'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15
)

print(pipe(prompt_template)[0]['generated_text'])
```

## Using other UIs: monkey patch

Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev.

It can theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest.
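
For interest, the core idea behind both the bundled modelling code and the monkey patch is linear interpolation of the rotary position embeddings: positions are compressed so that an 8192-token sequence occupies the positional range the model saw during its original 2048-token training. The modelling code's `scale = 4` and the SuperHOT card's scaling factor of 0.25 describe the same operation (divide positions by 4, i.e. multiply by 0.25). The snippet below is only an illustrative sketch of that idea, not the contents of `llama_rope_scaled_monkey_patch.py`, and the function name is made up.

```python
import torch

def scaled_rope_angles(seq_len: int, head_dim: int, base: float = 10000.0, scale: float = 0.25):
    # Standard RoPE inverse frequencies for one attention head.
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    # The only change vs. vanilla RoPE: compress positions by `scale`,
    # so position 8191 is embedded roughly as if it were position 2047.
    positions = torch.arange(seq_len).float() * scale
    angles = torch.outer(positions, inv_freq)
    return angles.cos(), angles.sin()

# With scale=0.25, an 8192-token context is squeezed into the 0..2048 positional range.
cos, sin = scaled_rope_angles(seq_len=8192, head_dim=128, scale=0.25)
```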

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute.

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.

**Patreon special mentions**: zynix, ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost, Nathan LeClaire, Iucharbius, Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex, terasurfer, Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski.

Thank you to all my generous patrons and donaters!

<!-- footer end -->

# Original model card: Kaio Ken's SuperHOT 8K

### SuperHOT Prototype 2 w/ 8K Context

This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).
Tests have shown that the model does indeed leverage the extended context at 8K.

You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**.

#### Looking for Merged & Quantized Models?
- 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors)
- 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors)

#### Training Details
I trained the LoRA with the following configuration (an illustrative code sketch of these settings follows the list):
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
  - q_proj
  - k_proj
  - v_proj
  - o_proj
  - no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model
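
As a rough illustration only, the configuration above might be expressed with the `peft` and `torch` libraries as in the sketch below. This is not kaiokendev's actual training script: the dataset handling, 4-bit base model loading and training loop are omitted, and the helper name is made up.

```python
import torch
from peft import LoraConfig

# Illustrative LoRA settings matching the list above:
# rank 4, alpha 8, q/k/v/o projections, no bias, no dropout.
lora_config = LoraConfig(
    r=4,
    lora_alpha=8,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.0,
    bias="none",
    task_type="CAUSAL_LM",
)

# Optimiser settings matching the list above; `model` would be the base model
# wrapped with this LoRA config before training.
def make_optimizer(model):
    return torch.optim.AdamW(
        model.parameters(),
        lr=3e-4,
        betas=(0.9, 0.99),
        eps=1e-5,
        weight_decay=0.1,
    )
```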

# Original model card: Minlik's Chinese Alpaca 33B Merged

This is the Chinese Alpaca-33B model, obtained by adding a Chinese vocabulary, continuing pre-training of the Chinese embeddings, and then fine-tuning on instruction datasets on top of that.

The base and LoRA models used for the conversion are:
- base-model: elinas/llama-30b-hf-transformers-4.29
- lora-model: ziqingyang/chinese-alpaca-lora-33b

For details, see: https://github.com/ymcui/Chinese-LLaMA-Alpaca/releases/tag/v4.0
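
For interest, a LoRA merge of this kind is commonly performed with the `peft` library along the lines of the sketch below. This is only an illustrative outline, not the project's actual conversion script (see the release notes linked above, which also cover the extended Chinese tokenizer in detail); the output path and the embedding-resize step are assumptions.

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

# Load the base model and the extended Chinese tokenizer shipped with the LoRA.
base = LlamaForCausalLM.from_pretrained("elinas/llama-30b-hf-transformers-4.29",
                                        torch_dtype=torch.float16)
tokenizer = LlamaTokenizer.from_pretrained("ziqingyang/chinese-alpaca-lora-33b")

# Grow the embedding matrix to cover the added Chinese tokens before applying the LoRA.
base.resize_token_embeddings(len(tokenizer))

# Apply the LoRA and fold its weights into the base weights.
model = PeftModel.from_pretrained(base, "ziqingyang/chinese-alpaca-lora-33b")
model = model.merge_and_unload()

model.save_pretrained("chinese-alpaca-33b-merged")
tokenizer.save_pretrained("chinese-alpaca-33b-merged")
```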

### Usage
1. Install the required packages
```bash
pip install sentencepiece
pip install transformers>=4.28.0
```

2. Generate text
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM

def generate_prompt(text):
    return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{text}

### Response:"""


tokenizer = LlamaTokenizer.from_pretrained('minlik/chinese-alpaca-33b-merged')
model = LlamaForCausalLM.from_pretrained('minlik/chinese-alpaca-33b-merged').half().to('cuda')
model.eval()

text = '第一个登上月球的人是谁?'
prompt = generate_prompt(text)
input_ids = tokenizer.encode(prompt, return_tensors='pt').to('cuda')


with torch.no_grad():
    output_ids = model.generate(
        input_ids=input_ids,
        max_new_tokens=128,
        temperature=1,
        top_k=40,
        top_p=0.9,
        repetition_penalty=1.15
    )
output = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(output.replace(prompt, '').strip())
```