JustinLin610 committed
Commit c287120
Parent(s): 66194d2

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -17,7 +17,7 @@ tags:
 
 Qwen1.5-MoE is a transformer-based MoE decoder-only language model pretrained on a large amount of data.
 
-For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen1.5-moe/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).
+For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen-moe/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).
 
 ## Model Details
 Qwen1.5-MoE employs a Mixture of Experts (MoE) architecture in which the models are upcycled from dense language models. For instance, `Qwen1.5-MoE-A2.7B` is upcycled from `Qwen-1.8B`. It has 14.3B parameters in total and 2.7B activated parameters at runtime. While achieving performance comparable to `Qwen1.5-7B`, it requires only 25% of the training resources. We also observed that its inference speed is 1.74 times that of `Qwen1.5-7B`.
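For context on the model described in this README, here is a minimal usage sketch, not part of this commit, assuming a `transformers` release recent enough to include the `qwen2_moe` architecture (4.40+ at the time of writing) and enough memory for the 14.3B total parameters:

```python
# Minimal sketch: load Qwen1.5-MoE-A2.7B with Hugging Face transformers.
# Assumes transformers>=4.40 (the qwen2_moe architecture) is installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen1.5-MoE-A2.7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native dtype
    device_map="auto",    # spread weights across available devices
)

# Although 14.3B parameters are loaded, only ~2.7B are activated per token.
inputs = tokenizer("Qwen1.5-MoE is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```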