---
license: apache-2.0
license_link: https://huggingface.co/Qwen/QwQ-32B-Preview/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: JackCloudman/QwQ-32B-Preview-jackterated
tags:
- chat
- abliterated
- uncensored
- llama.cpp
- GGUF
library_name: transformers
---
# QwQ-32B-Preview-jackterated
This is an experimental version. For more information about the abliteration technique, refer to [this article](https://huggingface.co/blog/mlabonne/abliteration) and check out [@FailSpy](https://huggingface.co/failspy).
# QwQ-32B-Preview - original description
## Introduction

**QwQ-32B-Preview** is an experimental research model developed by the Qwen Team, focused on advancing AI reasoning capabilities. As a preview release, it demonstrates promising analytical abilities while having several important limitations:

1. **Language Mixing and Code-Switching**: The model may mix languages or switch between them unexpectedly, affecting response clarity.
2. **Recursive Reasoning Loops**: The model may enter circular reasoning patterns, leading to lengthy responses without a conclusive answer.
3. **Safety and Ethical Considerations**: The model requires enhanced safety measures to ensure reliable and secure performance, and users should exercise caution when deploying it.
4. **Performance and Benchmark Limitations**: The model excels in math and coding but has room for improvement in other areas, such as common sense reasoning and nuanced language understanding.

**Specification**:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 32.5B
- Number of Parameters (Non-Embedding): 31.0B
- Number of Layers: 64
- Number of Attention Heads (GQA): 40 for Q and 8 for KV
- Context Length: Full 32,768 tokens

For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwq-32b-preview/). You can also check Qwen2.5 [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).

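As a quick cross-check of the specification above, the sketch below reads the published configuration with `AutoConfig` and prints the matching fields. It is an illustrative addition rather than part of the original card, and the attribute names assume the standard Qwen2 configuration layout.

```python
# Minimal sketch: map the specification bullets to the model's config fields.
# Assumes the standard Qwen2 config attribute names; adjust if the repo differs.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Qwen/QwQ-32B-Preview")

print("Layers:", config.num_hidden_layers)                  # expected: 64
print("Attention heads (Q):", config.num_attention_heads)   # expected: 40
print("KV heads (GQA):", config.num_key_value_heads)        # expected: 8
print("Context length:", config.max_position_embeddings)    # expected: 32768
```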

## Requirements

The code of Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.

With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```

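If you are unsure whether your environment meets that requirement, the short sketch below checks the installed version before loading the model. The `4.37.0` floor is taken from the error above; the snippet itself is an illustrative addition rather than part of the original card.

```python
# Minimal sketch: confirm the installed transformers version can load the
# 'qwen2' architecture (>= 4.37.0, per the error shown above).
from packaging import version  # shipped as a dependency of transformers
import transformers

MIN_VERSION = "4.37.0"

if version.parse(transformers.__version__) < version.parse(MIN_VERSION):
    raise RuntimeError(
        f"transformers {transformers.__version__} is too old; "
        f"please upgrade to >= {MIN_VERSION}."
    )
print(f"transformers {transformers.__version__} looks OK")
```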

## Quickstart

Here we provide a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and how to generate content.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/QwQ-32B-Preview"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "How many r in strawberry."
messages = [
    {"role": "system", "content": "You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
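
Because the model can produce long chains of reasoning (see the limitations above), it is often convenient to stream tokens as they are generated. The sketch below is an optional variant of the quickstart, not part of the original card; it reuses `model`, `tokenizer`, and `model_inputs` from the snippet above together with `transformers`' `TextStreamer`.

```python
# Optional variant: stream the decoded text to stdout as it is generated.
# Reuses model, tokenizer, and model_inputs from the quickstart above.
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

_ = model.generate(
    **model_inputs,
    max_new_tokens=512,
    streamer=streamer,
)
```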

## Citation

If you find our work helpful, feel free to cite us.

```
@misc{qwq-32b-preview,
    title = {QwQ: Reflect Deeply on the Boundaries of the Unknown},
    url = {https://qwenlm.github.io/blog/qwq-32b-preview/},
    author = {Qwen Team},
    month = {November},
    year = {2024}
}

@article{qwen2,
    title={Qwen2 Technical Report},
    author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
    journal={arXiv preprint arXiv:2407.10671},
    year={2024}
}
```