feihu.hf committed
Commit b0bbd4c
Parent: d2d1b31

update README.md

Files changed (1): README.md (+1, -23)
README.md CHANGED
@@ -34,8 +34,7 @@ Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (
 - Number of Parameters (Non-Embedding): 0.36B
 - Number of Layers: 24
 - Number of Attention Heads (GQA): 14 for Q and 2 for KV
-- Context Length: Full 131,072 tokens
-- Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.
+- Context Length: Full 32,768 tokens
 - Quantization: GPTQ 8-bit
 
 For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186).
@@ -90,27 +89,6 @@ generated_ids = [
 response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
 ```
 
-### Processing Long Texts
-
-The current `config.json` is set for context length up to 32,768 tokens.
-To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
-
-For supported frameworks, you could add the following to `config.json` to enable YaRN:
-```json
-{
-  ...,
-  "rope_scaling": {
-    "factor": 4.0,
-    "original_max_position_embeddings": 32768,
-    "type": "yarn"
-  }
-}
-```
-
-For deployment, we recommend using vLLM.
-Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
-Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
-We advise adding the `rope_scaling` configuration only when processing long contexts is required.
 
 ## Evaluation & Performance
 
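The substance of the change: the model card previously advertised the full 131,072-token context and pointed readers to the YaRN instructions, and after this commit it advertises the 32,768 tokens that the shipped `config.json` actually declares. A quick way to confirm what a checkpoint reports is sketched below; the repo id `Qwen/Qwen2.5-Coder-0.5B-Instruct-GPTQ-Int8` is an assumption inferred from the spec list, since the commit page never names the repository.

```python
from transformers import AutoConfig

# Assumed repo id; this commit page never names the repository.
MODEL_ID = "Qwen/Qwen2.5-Coder-0.5B-Instruct-GPTQ-Int8"

cfg = AutoConfig.from_pretrained(MODEL_ID)
# The removed section stated that config.json is set for 32,768 tokens,
# which is what the README now advertises.
print(cfg.max_position_embeddings)  # expected: 32768
```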
 
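For reference, the removed `config.json` edit can also be expressed as a load-time override in `transformers`, which forwards extra keyword arguments to the model config. A minimal sketch under the same assumed repo id; note that the commit's point stands, as this checkpoint no longer documents long-context use, so treat YaRN here as experimental.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Qwen/Qwen2.5-Coder-0.5B-Instruct-GPTQ-Int8"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
# Same values as the removed config.json snippet, applied as a config
# override instead of editing the file on disk.
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    device_map="auto",
    rope_scaling={
        "type": "yarn",
        "factor": 4.0,
        "original_max_position_embeddings": 32768,
    },
)
```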
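The removed section also recommended vLLM for long-context deployment, with the caveat that vLLM applies YaRN statically: the scaling factor is fixed regardless of input length, which can hurt performance on short texts. A hedged sketch of that route follows; whether `rope_scaling` is accepted as an engine argument, and which keys it expects, varies across vLLM releases, so verify against the vLLM documentation for your version.

```python
from vllm import LLM, SamplingParams

# Assumed repo id and engine arguments; the rope_scaling values mirror
# the removed config.json snippet. Check your vLLM version's docs.
llm = LLM(
    model="Qwen/Qwen2.5-Coder-0.5B-Instruct-GPTQ-Int8",
    rope_scaling={
        "type": "yarn",
        "factor": 4.0,
        "original_max_position_embeddings": 32768,
    },
    max_model_len=131072,
)
outputs = llm.generate(["# binary search in Python\n"],
                       SamplingParams(max_tokens=128))
print(outputs[0].outputs[0].text)
```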