ajibawa-2023 committed e66fd7b (1 parent: e2595df): Update README.md

README.md CHANGED
@@ -23,18 +23,19 @@ I have released the new data [Code-290k-ShareGPT](https://huggingface.co/dataset
 Entire dataset was trained on 4 x A100 80GB. For 3 epoch, training took 165 hours. DeepSpeed codebase was used for training purpose. This was trained on Llama-2 by Meta.


-This is a full fine tuned model. Links for quantized models
+This is a full fine tuned model. Links for quantized models are given below.


 **GPTQ GGUF & AWQ**

-GPTQ:
+GPTQ: [Link](https://huggingface.co/TheBloke/Code-290k-13B-GPTQ)

-GGUF:
+GGUF: [Link](https://huggingface.co/TheBloke/Code-290k-13B-GGUF)

-AWQ:
+AWQ: [Link](https://huggingface.co/TheBloke/Code-290k-13B-AWQ)


+Extremely thankful to [TheBloke](https://huggingface.co/TheBloke) for making Quantized versions of the model.


 **Example Prompt:**
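For readers who want to try one of the quantized builds linked in this commit, here is a minimal sketch of loading the GPTQ repo with the Hugging Face `transformers` API. It assumes the optional GPTQ dependencies (optimum / auto-gptq) are installed; the prompt string and generation settings are illustrative placeholders, and the model's actual prompt template is the one documented under **Example Prompt:** in the README.

```python
# Minimal sketch: load the GPTQ quantization linked above via transformers.
# Assumes GPU hardware plus optimum / auto-gptq installed; prompt text and
# generation settings are placeholders, not the README's prompt template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Code-290k-13B-GPTQ"  # repo id from the link in this commit
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Write a Python function that checks whether a number is prime."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```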