JustinLin610 committed
Commit a22a9be
1 Parent(s): e4e9230

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -16,7 +16,7 @@ tags:
 
 Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previous released Qwen, the improvements include:
 
-* 6 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, and 72B;
+* 8 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B and 72B dense models, and an MoE model of 14B with 2.7B activated;
 * Significant performance improvement in human preference for chat models;
 * Multilingual support of both base and chat models;
 * Stable support of 32K context length for models of all sizes
@@ -36,11 +36,11 @@ To demonstrate their model quality, we follow [`llama.cpp`](https://github.com/g
 |72B | 7.97 | 7.99 | 7.99 | 7.99 | 8.01 | 8.00 | 8.01 | 8.06 | 8.63 |
 
 ## Model Details
-Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes. For the beta version, temporarily we did not include GQA and the mixture of SWA and full attention.
+Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes. For the beta version, temporarily we did not include GQA (except for 32B) and the mixture of SWA and full attention.
 
 
 ## Training details
-We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization. However, DPO leads to improvements in human preference evaluation but degradation in benchmark evaluation. In the very near future, we will fix both problems.
+We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.
 
 
 ## Requirements
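For context on the wiki perplexity table touched by the second hunk: the README credits [`llama.cpp`](https://github.com/ggerganov/llama.cpp)'s evaluation procedure. Below is a minimal sketch of an analogous strided measurement with Hugging Face `transformers`; the model id, dataset split, and 2048-token window are illustrative assumptions, not the exact setup behind the table.

```python
# Rough sketch of strided perplexity measurement, loosely analogous to
# llama.cpp's `perplexity` tool over the wiki test set. Model id, dataset
# split, and window size are illustrative assumptions.
import math

import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen1.5-0.5B"  # hypothetical pick; any released size works
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
model.eval()

text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])
ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)

window = 2048  # evaluate in non-overlapping windows
nll_sum, n_tokens = 0.0, 0
for start in range(0, ids.size(1), window):
    chunk = ids[:, start : start + window]
    if chunk.size(1) < 2:  # need at least one next-token prediction
        break
    with torch.no_grad():
        # Passing labels=input_ids makes the model shift them internally
        # and return the mean next-token NLL over the window.
        loss = model(chunk, labels=chunk).loss
    n = chunk.size(1) - 1
    nll_sum += loss.item() * n
    n_tokens += n

print(f"perplexity: {math.exp(nll_sum / n_tokens):.2f}")
```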
 
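Both hunks leave the `## Requirements` section itself unchanged. For readers landing here, basic chat usage of the released checkpoints looks roughly like the sketch below; the repo id and the `transformers>=4.37.0` floor are assumptions drawn from the published Qwen1.5 model cards, not from the hunks shown above.

```python
# Minimal chat-usage sketch (assumes transformers>=4.37.0, which the
# published Qwen1.5 model cards recommend; the repo id is illustrative).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen1.5-7B-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Briefly introduce Qwen1.5."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=128)
# Strip the prompt tokens before decoding the reply.
print(tokenizer.decode(outputs[0][inputs.input_ids.size(1):], skip_special_tokens=True))
```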