Commit 21dedb0 (parent 9418240) by Apel-sin: add measurement.json
Files changed (2): README.md (+102 −0), measurement.json

README.md ADDED:
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2
quantized_by: Apel-sin
tags:
- chat
- abliterated
- uncensored
---

# huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2

This is an uncensored version of [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) created with abliteration (see [this article](https://huggingface.co/blog/mlabonne/abliteration) to learn more about the technique).

Special thanks to [@FailSpy](https://huggingface.co/failspy) for the original code and technique. Please follow him if you're interested in abliterated models.

**Important Note:** This version is an improvement over the previous release, [Qwen2.5-14B-Instruct-abliterated](https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated).

## Usage
You can use this model in your applications by loading it with Hugging Face's `transformers` library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer
model_name = "huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Initialize conversation context
initial_messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."}
]
messages = initial_messages.copy()  # Copy the initial conversation context

# Enter conversation loop
while True:
    # Get user input
    user_input = input("User: ").strip()  # Strip leading and trailing spaces

    # If the user types '/exit', end the conversation
    if user_input.lower() == "/exit":
        print("Exiting chat.")
        break

    # If the user types '/clean', reset the conversation context
    if user_input.lower() == "/clean":
        messages = initial_messages.copy()  # Reset conversation context
        print("Chat history cleared. Starting a new conversation.")
        continue

    # If input is empty, prompt the user and continue
    if not user_input:
        print("Input cannot be empty. Please enter something.")
        continue

    # Add user input to the conversation
    messages.append({"role": "user", "content": user_input})

    # Build the chat prompt from the message history
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True
    )

    # Tokenize the prompt and move it to the model's device
    model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

    # Generate a response from the model
    generated_ids = model.generate(
        **model_inputs,
        max_new_tokens=8192
    )

    # Extract the model output, dropping the prompt tokens from each sequence
    generated_ids = [
        output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
    ]
    response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

    # Add the model's response to the conversation
    messages.append({"role": "assistant", "content": response})

    # Print the model's response
    print(f"Qwen: {response}")
```
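One step in the loop above is worth a closer look: `generate` returns each sequence as the prompt tokens followed by the newly generated tokens, so the slice `output_ids[len(input_ids):]` keeps only the model's reply. A minimal, model-free sketch of that slicing logic (the token IDs here are made up for illustration):

```python
# Simulated batch: prompt token IDs and the full sequences a generate() call
# would return (prompt tokens followed by new tokens). IDs are illustrative only.
input_ids_batch = [
    [151644, 8948, 151645],         # hypothetical prompt for item 0
    [151644, 872, 99, 151645],      # hypothetical prompt for item 1
]
generated_batch = [
    [151644, 8948, 151645, 40, 1079, 13],
    [151644, 872, 99, 151645, 9707, 0],
]

# Same slicing as in the chat loop: drop the prompt prefix from each sequence.
new_tokens = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(input_ids_batch, generated_batch)
]

print(new_tokens)  # [[40, 1079, 13], [9707, 0]]
```

In the real loop the sequences are PyTorch tensors rather than lists, but tensor slicing behaves the same way, so the decoded `response` contains only the assistant's new text.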

## Evaluations
Evaluation is ongoing; results will be added later.
measurement.json ADDED (diff too large to render; see raw diff)