Update README.md
README.md CHANGED

@@ -3,6 +3,8 @@ pipeline_tag: text-generation
 quantized_by: bartowski
 ---

+Update Jan 27: This model was done before some config updates from internlm, please try the new one here and report any differences: https://huggingface.co/bartowski/internlm2-chat-20b-llama-exl2/
+
 #### Special thanks to <a href="https://huggingface.co/chargoddard">Charles Goddard</a> for the conversion script to create llama models from internlm

 ## Exllama v2 Quantizations of internlm2-chat-20b-llama
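
For anyone following the update note, a minimal sketch of pulling the newer repo linked above with huggingface_hub. The repo_id comes from the URL in the diff; the revision "6_5" and the local directory name are assumptions (these exl2 repos usually publish one branch per bitrate, so check the branch list on the model page first).

```python
# Minimal sketch: download one quant branch of the updated exl2 repo.
# Assumptions: a per-bitrate branch named "6_5" exists and the local
# directory name is arbitrary; adjust both to taste.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="bartowski/internlm2-chat-20b-llama-exl2",
    revision="6_5",  # hypothetical branch name; pick one listed on the repo
    local_dir="internlm2-chat-20b-llama-exl2-6_5",
)
```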