Update README.md
README.md CHANGED

```diff
@@ -17,7 +17,7 @@ Compared with the state-of-the-art opensource language models, including the pre
 
 For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/).
 
-In this repo, we provide
+In this repo, we provide quantized models in the GGUF formats, including `q5_0`, `q5_k_m`, `q6_k` and `q8_0`.
 
 ## Model Details
 Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes.
```
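The added line lists the quantizations shipped in this repo (`q5_0`, `q5_k_m`, `q6_k`, `q8_0`). As a usage sketch only: GGUF files like these are typically run with llama.cpp's CLI. The file name below is a hypothetical placeholder, not a file confirmed by this commit; substitute the actual `.gguf` file from the repo.

```shell
# Hypothetical usage sketch with llama.cpp (file name is a placeholder):
#   -m selects the GGUF model file
#   -p supplies the prompt
#   -n caps the number of generated tokens
./llama-cli -m qwen2-7b-instruct-q5_k_m.gguf -p "Hello, who are you?" -n 64
```

Lower-bit quantizations such as `q5_0` trade some accuracy for a smaller file and lower memory use, while `q8_0` stays closest to the original weights.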