Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


sidekick - AWQ
- Model creator: https://huggingface.co/CyberRift/
- Original model: https://huggingface.co/CyberRift/sidekick/

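A minimal sketch of loading an AWQ checkpoint such as this one with `transformers` (this assumes a recent `transformers` plus the `autoawq` package, and the repo id below is a placeholder, since the quantized repo's id is not spelled out in this card):

```python
# Minimal sketch: load an AWQ-quantized causal LM (assumes autoawq is installed).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "<this-awq-repo>"  # placeholder: substitute the id of this quantized repo

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",  # AWQ kernels require a GPU
)

inputs = tokenizer("Why is drinking water so healthy?", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```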


Original model description:
---
language:
- en
library_name: transformers
tags:
- llm
- large language model
inference: false
thumbnail: null
datasets:
- OpenAssistant/oasst1
---
# Model Card
## Summary

- Base model: [EleutherAI/pythia-1.4b-deduped](https://huggingface.co/EleutherAI/pythia-1.4b-deduped)


## Usage

To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` library installed.

```bash
pip install transformers==4.34.0
```

Also make sure you are providing your Hugging Face token to the pipeline if the model lives in a private repo.
- Either leave `token=True` in the `pipeline` and log in to `huggingface_hub` by running
```python
import huggingface_hub
huggingface_hub.login("<ACCESS_TOKEN>")
```
- Or directly pass your `<ACCESS_TOKEN>` to `token` in the `pipeline`

```python
from transformers import pipeline

generate_text = pipeline(
    model="CyberRift/sidekick",
    torch_dtype="auto",
    trust_remote_code=True,
    use_fast=True,
    device_map={"": "cuda:0"},
    token=True,
)

res = generate_text(
    "Why is drinking water so healthy?",
    min_new_tokens=2,
    max_new_tokens=256,
    do_sample=False,
    num_beams=2,
    temperature=float(0.3),
    repetition_penalty=float(1.2),
    renormalize_logits=True
)
print(res[0]["generated_text"])
```

You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:

```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```

```bash
Why is drinking water so healthy?<|endoftext|>
```

Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.

```python
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "CyberRift/sidekick",
    use_fast=True,
    padding_side="left",
    trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
    "CyberRift/sidekick",
    torch_dtype="auto",
    device_map={"": "cuda:0"},
    trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)

res = generate_text(
    "Why is drinking water so healthy?",
    min_new_tokens=2,
    max_new_tokens=256,
    do_sample=False,
    num_beams=2,
    temperature=float(0.3),
    repetition_penalty=float(1.2),
    renormalize_logits=True
)
print(res[0]["generated_text"])
```


You may also construct the pipeline from the loaded model and tokenizer yourself and handle the preprocessing steps explicitly:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "CyberRift/sidekick"  # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "How are you?<|endoftext|>"

tokenizer = AutoTokenizer.from_pretrained(
    model_name,
    use_fast=True,
    trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map={"": "cuda:0"},
    trust_remote_code=True,
)
model.eval()  # device_map already placed the model on cuda:0
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")

# generation configuration can be adjusted to your needs
tokens = model.generate(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    min_new_tokens=2,
    max_new_tokens=256,
    do_sample=False,
    num_beams=2,
    temperature=float(0.3),
    repetition_penalty=float(1.2),
    renormalize_logits=True
)[0]

# strip the prompt tokens so only the newly generated answer is decoded
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```

## Quantization and sharding

You can load the model with quantization by specifying `load_in_8bit=True` or `load_in_4bit=True`. Sharding across multiple GPUs is possible by setting `device_map="auto"`.
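A minimal sketch of what that looks like (assuming the optional `bitsandbytes` and `accelerate` packages are installed, which this card does not state):

```python
# Minimal sketch: 8-bit quantized loading, sharded across all visible GPUs.
# Requires the optional bitsandbytes and accelerate packages.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "CyberRift/sidekick",
    load_in_8bit=True,   # or load_in_4bit=True
    device_map="auto",   # shard layers across available GPUs
    trust_remote_code=True,
)
```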

## Model Architecture

```
GPTNeoXForCausalLM(
  (gpt_neox): GPTNeoXModel(
    (embed_in): Embedding(50304, 2048)
    (emb_dropout): Dropout(p=0.0, inplace=False)
    (layers): ModuleList(
      (0-23): 24 x GPTNeoXLayer(
        (input_layernorm): LayerNorm((2048,), eps=1e-05, elementwise_affine=True)
        (post_attention_layernorm): LayerNorm((2048,), eps=1e-05, elementwise_affine=True)
        (post_attention_dropout): Dropout(p=0.0, inplace=False)
        (post_mlp_dropout): Dropout(p=0.0, inplace=False)
        (attention): GPTNeoXAttention(
          (rotary_emb): GPTNeoXRotaryEmbedding()
          (query_key_value): Linear(in_features=2048, out_features=6144, bias=True)
          (dense): Linear(in_features=2048, out_features=2048, bias=True)
          (attention_dropout): Dropout(p=0.0, inplace=False)
        )
        (mlp): GPTNeoXMLP(
          (dense_h_to_4h): Linear(in_features=2048, out_features=8192, bias=True)
          (dense_4h_to_h): Linear(in_features=8192, out_features=2048, bias=True)
          (act): GELUActivation()
        )
      )
    )
    (final_layer_norm): LayerNorm((2048,), eps=1e-05, elementwise_affine=True)
  )
  (embed_out): Linear(in_features=2048, out_features=50304, bias=False)
)
```
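The tree above is simply the model's printed module structure; a quick way to reproduce it (exact output may vary slightly across `transformers` versions):

```python
# Print the module tree shown above (output can differ slightly by version).
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "CyberRift/sidekick",
    trust_remote_code=True,
)
print(model)
```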


## Disclaimer

Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.

- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.

By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.