Update README.md to fix broken links to blog
Fixed two broken links to the Qwen2.5-Coder blog.

README.md

```diff
@@ -38,7 +38,7 @@ Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (
 
 **We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., or fill in the middle tasks on this model.
 
-For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder
+For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186).
 
 ## Requirements
 
@@ -73,7 +73,7 @@ We advise adding the `rope_scaling` configuration only when processing long cont
 
 ## Evaluation & Performance
 
-Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder
+Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder/).
 
 For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
```
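For anyone who wants to try the fill-in-the-middle usage mentioned in the first hunk's context, here is a minimal sketch. The `<|fim_prefix|>`/`<|fim_suffix|>`/`<|fim_middle|>` control tokens and the `Qwen/Qwen2.5-Coder-7B` checkpoint id are assumptions taken from the Qwen2.5-Coder documentation linked above, not from this diff; verify them against the tokenizer config of the exact model you load.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical checkpoint id; any Qwen2.5-Coder base model should work the same way.
model_id = "Qwen/Qwen2.5-Coder-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Code before and after the hole we want the model to fill in.
prefix = "def quicksort(arr):\n    if len(arr) <= 1:\n        return arr\n"
suffix = "\n    return quicksort(left) + middle + quicksort(right)\n"

# Fill-in-the-middle prompt: the assumed FIM control tokens mark the prefix, the suffix,
# and the position where the completion should be generated.
prompt = f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)

# Print only the newly generated middle part.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```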
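The second hunk header references the `rope_scaling` advice for long contexts. As a hedged illustration only, the sketch below overrides the model config with the YaRN-style block that the Qwen2.5 documentation suggests adding to `config.json`; the exact values are an assumption, whether static YaRN is honored depends on the transformers or serving-framework version, and it should be enabled only when inputs actually exceed the default context length.

```python
from transformers import AutoConfig, AutoModelForCausalLM

# Hypothetical checkpoint id, reused from the sketch above.
model_id = "Qwen/Qwen2.5-Coder-7B"

# YaRN-style rope_scaling block (values assumed from the Qwen2.5 documentation);
# static scaling can hurt short inputs, so add it only for long-context workloads.
config = AutoConfig.from_pretrained(model_id)
config.rope_scaling = {
    "type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
}

model = AutoModelForCausalLM.from_pretrained(
    model_id, config=config, torch_dtype="auto", device_map="auto"
)
```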