feihu.hf committed on
Commit f712fd8 (parent: 09af798)

update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -19,7 +19,7 @@ Compared with the state-of-the-art opensource language models, including the pre
 
 Qwen2-72B-Instruct-GPTQ-Int4 supports a context length of up to 131,072 tokens, enabling the processing of extensive inputs. Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2 for handling long texts.
 
-For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/) and [GitHub](https://github.com/QwenLM/Qwen2).
+For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/).
 <br>
 
 ## Model Details
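For context on the checkpoint this README belongs to, a minimal loading sketch with Hugging Face `transformers` might look like the following. The repo id `Qwen/Qwen2-72B-Instruct-GPTQ-Int4`, the prompt, and the generation settings below are assumptions for illustration and are not part of this commit; long-context deployment is covered by the model card section the README points to.

```python
# Minimal sketch (assumed usage, not part of this commit): load the GPTQ-Int4
# checkpoint with Hugging Face transformers and run a short chat completion.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2-72B-Instruct-GPTQ-Int4"  # assumed Hub repo id

# device_map="auto" spreads the quantized weights across the available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "user", "content": "Give me a short introduction to large language models."}
]
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate a bounded number of new tokens; settings for inputs approaching the
# 131,072-token context limit are described in the linked long-texts section.
output_ids = model.generate(**inputs, max_new_tokens=256)
response = tokenizer.decode(
    output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True
)
print(response)
```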