feihu.hf committed
Commit 484fdb9 • Parent: dcc04bb
update README & LICENSE
README.md CHANGED
@@ -1,5 +1,6 @@
 ---
 license: apache-2.0
+license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct/blob/main/LICENSE
 language:
 - en
 base_model:
@@ -28,12 +29,13 @@ Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (
 **This repo contains the instruction-tuned 7B Qwen2.5-Coder model**, which has the following features:
 - Type: Causal Language Models
 - Training Stage: Pretraining & Post-training
-- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias
+- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
 - Number of Parameters: 7.61B
 - Number of Parameters (Non-Embedding): 6.53B
 - Number of Layers: 28
 - Number of Attention Heads (GQA): 28 for Q and 4 for KV
-- Context Length: 131,072 tokens
+- Context Length: Full 131,072 tokens
+- Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.
 
 For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), and [Documentation](https://qwen.readthedocs.io/en/latest/).
 
@@ -85,7 +87,27 @@ generated_ids = [
 response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
 ```
 
+### Processing Long Texts
 
+The current `config.json` is set for a context length of up to 32,768 tokens.
+To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
+
+For supported frameworks, you could add the following to `config.json` to enable YaRN:
+```json
+{
+  ...,
+  "rope_scaling": {
+    "factor": 4.0,
+    "original_max_position_embeddings": 32768,
+    "type": "yarn"
+  }
+}
+```
+
+For deployment, we recommend using vLLM.
+Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
+Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
+We advise adding the `rope_scaling` configuration only when processing long contexts is required.
 
 ## Evaluation & Performance
 
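The `factor` value in the new `rope_scaling` block is what takes the window from 32,768 to the advertised 131,072 tokens (4.0 × 32,768 = 131,072). As an illustration only, not part of this commit: a minimal sketch of applying the same override at load time instead of editing `config.json`, assuming a `transformers` version that forwards unrecognized `from_pretrained` keyword arguments to the model config.

```python
# Illustrative sketch (not from the commit): enable YaRN at load time
# rather than editing config.json. Assumes transformers forwards unknown
# from_pretrained kwargs to the model config, overriding rope_scaling.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-Coder-7B-Instruct"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
    rope_scaling={
        "factor": 4.0,  # 4.0 * 32768 = 131072-token window
        "original_max_position_embeddings": 32768,
        "type": "yarn",
    },
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```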
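Likewise, for the vLLM deployment path the diff recommends, here is a minimal offline-inference sketch, assuming `config.json` has already been edited as shown above so that vLLM picks the YaRN settings up from the Hugging Face config; the prompt string is a placeholder.

```python
# Illustrative sketch (not from the commit): offline inference with vLLM
# after config.json has been patched with the rope_scaling block above.
# vLLM reads rope_scaling from the HF config, so only the window is set here.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen2.5-Coder-7B-Instruct",  # or a local path whose config.json was edited
    max_model_len=131072,  # full YaRN-extended window; needs ample KV-cache memory
)

params = SamplingParams(temperature=0.7, max_tokens=512)
prompts = ["Write a Python function that merges two sorted lists."]  # placeholder
for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```

Because vLLM applies the static scaling factor to every request, keeping an unmodified copy of the config for short-context serving follows the commit's own advice.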