bzheng committed on
Commit 207fa13
1 Parent(s): 758c097

Update README.md

Files changed (1)
  1. README.md +74 -1
README.md CHANGED
@@ -1,5 +1,78 @@
  ---
  license: other
  license_name: tongyi-qianwen
- license_link: LICENSE
+ license_link: >-
+   https://huggingface.co/Qwen/Qwen1.5-7B-Chat-GPTQ-Int4/blob/main/LICENSE
+ language:
+ - en
+ pipeline_tag: text-generation
+ tags:
+ - chat
  ---
+
+ # Qwen1.5-7B-Chat-GPTQ-Int4
+
+
+ ## Introduction
+
+ Qwen1.5-MoE is the beta version of Qwen2-MoE, a transformer-based decoder-only language model pretrained on a large amount of data.
+
+ For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).
+
+
+ ## Model Details
+ Qwen1.5-MoE is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, a mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and code. For the beta version, we have temporarily not included GQA or the mixture of SWA and full attention.
+
+ Qwen1.5-MoE employs a Mixture of Experts (MoE) architecture, where the models are upcycled from dense language models. For instance, `Qwen1.5-MoE-A2.7B` is upcycled from `Qwen-1.8B`. It has 14.3B parameters in total and 2.7B activated parameters at runtime. While achieving performance comparable to `Qwen1.5-7B`, it requires only 20% of the training resources. We also observed that its inference speed is 1.8 times that of `Qwen1.5-7B`.
+
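+ As a rough illustration of this layout, the sketch below reads the expert-related fields from the model config; the attribute names (`num_experts`, `num_experts_per_tok`) are assumed from the `qwen2_moe` configuration class in transformers rather than stated in this card.
+
+ ```python
+ from transformers import AutoConfig
+
+ # Inspect the MoE layout of the checkpoint; field names are assumed
+ # from the qwen2_moe configuration class in transformers.
+ config = AutoConfig.from_pretrained("Qwen/Qwen1.5-MoE-A2.7B-Chat-GPTQ-Int4")
+ print("experts per MoE layer:", config.num_experts)
+ print("experts activated per token:", config.num_experts_per_tok)
+ ```
+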
+ ## Training details
+ We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization. However, DPO leads to improvements in human preference evaluation but degradation in benchmark evaluation. We will fix both problems in the very near future.
+
+ ## Requirements
+ The code for Qwen1.5-MoE is included in the latest Hugging Face transformers, and we advise you to install `transformers>=4.39.0`, or you might encounter the following error:
+ ```
+ KeyError: 'qwen2_moe'
+ ```
+
+ ## Quickstart
+
+ Here is a code snippet showing how to use `apply_chat_template` to load the tokenizer and the model, and how to generate content.
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ device = "cuda" # the device to load the model onto
+
+ model = AutoModelForCausalLM.from_pretrained(
+     "Qwen/Qwen1.5-MoE-A2.7B-Chat-GPTQ-Int4",
+     torch_dtype="auto",
+     device_map="auto"
+ )
+ tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-MoE-A2.7B-Chat-GPTQ-Int4")
+
+ prompt = "Give me a short introduction to large language model."
+ messages = [
+     {"role": "system", "content": "You are a helpful assistant."},
+     {"role": "user", "content": prompt}
+ ]
+ # Render the chat messages into the model's chat template as plain text.
+ text = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True
+ )
+ model_inputs = tokenizer([text], return_tensors="pt").to(device)
+
+ generated_ids = model.generate(
+     model_inputs.input_ids,
+     max_new_tokens=512
+ )
+ # Strip the prompt tokens so that only the newly generated tokens are decoded.
+ generated_ids = [
+     output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
+ ]
+
+ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+ ```
+
+
+ ## Tips
+
+ * If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in `generation_config.json`; see the sketch below.
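+
+ Below is a minimal sketch (not part of the original card) of how one might load those hyper-parameters explicitly with `GenerationConfig` and pass them to `generate()`, reusing the `model` and `model_inputs` objects from the Quickstart above; note that `model.generate` already picks up the checkpoint's `generation_config.json` by default.
+
+ ```python
+ from transformers import GenerationConfig
+
+ # Load the sampling hyper-parameters shipped with the checkpoint
+ # (top_p, repetition_penalty, etc.); explicit loading is shown for illustration.
+ gen_config = GenerationConfig.from_pretrained("Qwen/Qwen1.5-MoE-A2.7B-Chat-GPTQ-Int4")
+
+ # Reuses `model` and `model_inputs` from the Quickstart snippet above.
+ generated_ids = model.generate(
+     model_inputs.input_ids,
+     generation_config=gen_config,
+     max_new_tokens=512
+ )
+ ```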