ajibawa-2023 committed • Commit 6e865bf • Parent(s): e66fd7b

Update README.md

README.md CHANGED
@@ -26,7 +26,7 @@ Entire dataset was trained on 4 x A100 80GB. For 3 epoch, training took 165 hours.
 This is a full fine tuned model. Links for quantized models are given below.
 
 
-**GPTQ GGUF &
+**GPTQ, GGUF, AWQ & Exllama**
 
 GPTQ: [Link](https://huggingface.co/TheBloke/Code-290k-13B-GPTQ)
 
@@ -34,8 +34,10 @@ GGUF: [Link](https://huggingface.co/TheBloke/Code-290k-13B-GGUF)
 
 AWQ: [Link](https://huggingface.co/TheBloke/Code-290k-13B-AWQ)
 
+Exllama v2: [Link](https://huggingface.co/bartowski/Code-290k-13B-exl2)
 
-
+
+Extremely thankful to [TheBloke](https://huggingface.co/TheBloke) and [Bartowski](https://huggingface.co/bartowski) for making quantized versions of the model.
 
 
 **Example Prompt:**
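For context only (not part of this commit): a minimal sketch of how one of the quantized builds linked above might be loaded, here the GPTQ repository via Hugging Face transformers. It assumes the optional GPTQ dependencies (e.g. optimum and auto-gptq) and a CUDA GPU are available; the prompt string below is a placeholder and should follow the format given under "Example Prompt" in the README.

```python
# Sketch: load TheBloke's GPTQ build of Code-290k-13B with transformers.
# Assumes optimum + auto-gptq are installed and a CUDA GPU is present.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Code-290k-13B-GPTQ"  # repo linked in the README above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Placeholder prompt; use the model's "Example Prompt" format in practice.
prompt = "Write a Python function that checks whether a number is prime."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```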