mav23 committed on
Commit
afeae29
1 Parent(s): f81f1ab

Upload folder using huggingface_hub

Files changed (3)
  1. .gitattributes +1 -0
  2. README.md +244 -0
  3. qwen2-boundless.Q4_0.gguf +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ qwen2-boundless.Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,244 @@
+ ---
+ license: apache-2.0
+ datasets:
+ - ystemsrx/Bad_Data_Alpaca
+ language:
+ - zh
+ library_name: transformers
+ pipeline_tag: text2text-generation
+ tags:
+ - Qwen
+ - causal-lm
+ - fine-tuned
+ - ethics
+ - Chinese
+ ---
+
+ [中文](README.zh.md)
+
+ # Qwen2-Boundless
+
+ ## Overview
+
+ Qwen2-Boundless is a fine-tuned model based on Qwen2-1.5B-Instruct, designed to answer questions of all kinds, including those involving ethical, illegal, pornographic, or violent content. It was trained on a dataset built specifically to let it handle such complex and diverse scenarios. Note that the fine-tuning data is entirely in Chinese, so the model performs best when prompted in Chinese.
+
+ > **Warning**: This model is intended for research and testing purposes only. Users must comply with local laws and regulations and are responsible for their own actions.
+
+ ## How to Use
+
+ You can load and use the model with the following code:
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ import os
+
+ device = "cuda"  # the device to load the model onto
+ # Load the model and tokenizer from the directory this script lives in
+ current_directory = os.path.dirname(os.path.abspath(__file__))
+
+ model = AutoModelForCausalLM.from_pretrained(
+     current_directory,
+     torch_dtype="auto",
+     device_map="auto"
+ )
+ tokenizer = AutoTokenizer.from_pretrained(current_directory)
+
+ prompt = "Hello?"
+ messages = [
+     {"role": "system", "content": ""},
+     {"role": "user", "content": prompt}
+ ]
+ # Render the conversation with the model's chat template
+ text = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True
+ )
+ model_inputs = tokenizer([text], return_tensors="pt").to(device)
+
+ generated_ids = model.generate(
+     model_inputs.input_ids,
+     max_new_tokens=512
+ )
+ # Keep only the newly generated tokens (drop the prompt)
+ generated_ids = [
+     output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
+ ]
+
+ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+ print(response)
+ ```
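+
+ This repository also includes a 4-bit quantized GGUF build of the model (`qwen2-boundless.Q4_0.gguf`). Below is a minimal sketch of chatting with that file through `llama-cpp-python`; the `n_ctx` value and the `chat_format` choice are assumptions rather than documented settings, so adjust them as needed.
+
+ ```python
+ # Hypothetical example: run the Q4_0 GGUF file with llama-cpp-python
+ # (pip install llama-cpp-python). Paths and settings below are assumptions.
+ from llama_cpp import Llama
+
+ llm = Llama(
+     model_path="qwen2-boundless.Q4_0.gguf",
+     n_ctx=4096,            # assumed context window; adjust to your hardware
+     chat_format="chatml",  # Qwen2 chat models use a ChatML-style template (assumption)
+ )
+
+ result = llm.create_chat_completion(
+     messages=[
+         {"role": "system", "content": ""},
+         {"role": "user", "content": "Hello?"},
+     ],
+     max_tokens=512,
+ )
+ print(result["choices"][0]["message"]["content"])
+ ```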
+
+ ### Continuous Conversation
+
+ To enable continuous conversation, use the following code:
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ import torch
+ import os
+
+ device = "cuda"  # the device to load the model onto
+
+ # Get the current script's directory
+ current_directory = os.path.dirname(os.path.abspath(__file__))
+
+ model = AutoModelForCausalLM.from_pretrained(
+     current_directory,
+     torch_dtype="auto",
+     device_map="auto"
+ )
+ tokenizer = AutoTokenizer.from_pretrained(current_directory)
+
+ messages = [
+     {"role": "system", "content": ""}
+ ]
+
+ while True:
+     # Get user input
+     user_input = input("User: ")
+
+     # Add user input to the conversation
+     messages.append({"role": "user", "content": user_input})
+
+     # Prepare the input text
+     text = tokenizer.apply_chat_template(
+         messages,
+         tokenize=False,
+         add_generation_prompt=True
+     )
+     model_inputs = tokenizer([text], return_tensors="pt").to(device)
+
+     # Generate a response
+     generated_ids = model.generate(
+         model_inputs.input_ids,
+         max_new_tokens=512
+     )
+     generated_ids = [
+         output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
+     ]
+
+     # Decode and print the response
+     response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+     print(f"Assistant: {response}")
+
+     # Add the generated response to the conversation
+     messages.append({"role": "assistant", "content": response})
+ ```
+
+ ### Streaming Response
+
+ For applications requiring streaming responses, use the following code:
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer
+ from transformers.trainer_utils import set_seed
+ from threading import Thread
+ import random
+ import os
+
+ DEFAULT_CKPT_PATH = os.path.dirname(os.path.abspath(__file__))
+
+ def _load_model_tokenizer(checkpoint_path, cpu_only):
+     tokenizer = AutoTokenizer.from_pretrained(checkpoint_path, resume_download=True)
+
+     device_map = "cpu" if cpu_only else "auto"
+
+     model = AutoModelForCausalLM.from_pretrained(
+         checkpoint_path,
+         torch_dtype="auto",
+         device_map=device_map,
+         resume_download=True,
+     ).eval()
+     model.generation_config.max_new_tokens = 512  # For chat.
+
+     return model, tokenizer
+
+ def _get_input() -> str:
+     while True:
+         try:
+             message = input('User: ').strip()
+         except UnicodeDecodeError:
+             print('[ERROR] Encoding error in input')
+             continue
+         except KeyboardInterrupt:
+             exit(1)
+         if message:
+             return message
+         print('[ERROR] Query is empty')
+
+ def _chat_stream(model, tokenizer, query, history):
+     conversation = [
+         {'role': 'system', 'content': ''},
+     ]
+     for query_h, response_h in history:
+         conversation.append({'role': 'user', 'content': query_h})
+         conversation.append({'role': 'assistant', 'content': response_h})
+     conversation.append({'role': 'user', 'content': query})
+     inputs = tokenizer.apply_chat_template(
+         conversation,
+         add_generation_prompt=True,
+         return_tensors='pt',
+     )
+     inputs = inputs.to(model.device)
+     streamer = TextIteratorStreamer(tokenizer=tokenizer, skip_prompt=True, timeout=60.0, skip_special_tokens=True)
+     generation_kwargs = dict(
+         input_ids=inputs,
+         streamer=streamer,
+     )
+     # Run generation in a background thread so tokens can be printed as they stream in
+     thread = Thread(target=model.generate, kwargs=generation_kwargs)
+     thread.start()
+
+     for new_text in streamer:
+         yield new_text
+
+ def main():
+     checkpoint_path = DEFAULT_CKPT_PATH
+     seed = random.randint(0, 2**32 - 1)  # Generate a random seed
+     set_seed(seed)  # Set the random seed
+     cpu_only = False
+
+     history = []
+
+     model, tokenizer = _load_model_tokenizer(checkpoint_path, cpu_only)
+
+     while True:
+         query = _get_input()
+
+         print(f"\nUser: {query}")
+         print("\nAssistant: ", end="")
+         try:
+             partial_text = ''
+             for new_text in _chat_stream(model, tokenizer, query, history):
+                 print(new_text, end='', flush=True)
+                 partial_text += new_text
+             print()
+             history.append((query, partial_text))
+
+         except KeyboardInterrupt:
+             print('Generation interrupted')
+             continue
+
+ if __name__ == "__main__":
+     main()
+ ```
+
+ ## Dataset
+
+ Qwen2-Boundless was fine-tuned on a dataset named `bad_data.json`, which covers a wide range of text on topics related to ethics, law, pornography, and violence. Because this fine-tuning data is entirely in Chinese, the model performs best in Chinese. If you would like to explore or use the dataset, it is available here:
+
+ - [bad_data.json Dataset](https://huggingface.co/datasets/ystemsrx/Bad_Data_Alpaca)
+
+ We also used some cybersecurity-related data, cleaned and organized from [this file](https://github.com/Clouditera/SecGPT/blob/main/secgpt-mini/%E5%A4%A7%E6%A8%A1%E5%9E%8B%E5%9B%9E%E7%AD%94%E9%9D%A2%E9%97%AE%E9%A2%98-cot.txt).
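+
+ If you want to inspect the fine-tuning data programmatically, the snippet below is a minimal sketch using the Hugging Face `datasets` library. The `train` split name and the Alpaca-style `instruction`/`input`/`output` fields are assumptions; check the dataset card for the actual schema.
+
+ ```python
+ # Minimal sketch: peek at the Bad_Data_Alpaca dataset with the `datasets` library.
+ # The split name and field layout below are assumptions, not documented facts.
+ from datasets import load_dataset
+
+ ds = load_dataset("ystemsrx/Bad_Data_Alpaca", split="train")
+ print(ds)     # number of rows and column names
+ print(ds[0])  # first record (expected to follow the Alpaca instruction/input/output layout)
+ ```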
+
+ ## GitHub Repository
+
+ For more details about the model and ongoing updates, please visit our GitHub repository:
+
+ - [GitHub: ystemsrx/Qwen2-Boundless](https://github.com/ystemsrx/Qwen2-Boundless)
+
+ ## License
+
+ This model and dataset are open-sourced under the Apache 2.0 License.
+
+ ## Disclaimer
+
+ All content provided by this model is for research and testing purposes only. The developers of this model are not responsible for any potential misuse. Users should comply with relevant laws and regulations and are solely responsible for their actions.
qwen2-boundless.Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ccd64daf67b457bb65cec0f5b0ce7afd6f1a264f27d8be0fd96acf234a5b6edd
+ size 934952704