legraphista committed
Commit 0a40ab8 • 1 Parent(s): 25f943b
Upload README.md with huggingface_hub
README.md CHANGED
@@ -57,25 +57,25 @@ Link: [here](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-G
 ### All Quants
 | Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
 | -------- | ---------- | --------- | ------ | ------------ | -------- |
-| [Llama-3-Instruct-8B-SimPO.Q8_0.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q8_0.gguf) | Q8_0 | 8.54GB | ✅ Available | ⚪ Static | 📦 No |
-| [Llama-3-Instruct-8B-SimPO.Q6_K.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q6_K.gguf) | Q6_K | 6.60GB | ✅ Available | ⚪ Static | 📦 No |
-| [Llama-3-Instruct-8B-SimPO.Q4_K.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q4_K.gguf) | Q4_K | 4.92GB | ✅ Available | 🟢 IMatrix | 📦 No |
-| [Llama-3-Instruct-8B-SimPO.Q3_K.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q3_K.gguf) | Q3_K | 4.02GB | ✅ Available | 🟢 IMatrix | 📦 No |
-| [Llama-3-Instruct-8B-SimPO.Q2_K.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q2_K.gguf) | Q2_K | 3.18GB | ✅ Available | 🟢 IMatrix | 📦 No |
 | [Llama-3-Instruct-8B-SimPO.BF16.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.BF16.gguf) | BF16 | 16.07GB | ✅ Available | ⚪ Static | 📦 No |
 | [Llama-3-Instruct-8B-SimPO.FP16.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.FP16.gguf) | F16 | 16.07GB | ✅ Available | ⚪ Static | 📦 No |
+| [Llama-3-Instruct-8B-SimPO.Q8_0.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q8_0.gguf) | Q8_0 | 8.54GB | ✅ Available | ⚪ Static | 📦 No |
+| [Llama-3-Instruct-8B-SimPO.Q6_K.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q6_K.gguf) | Q6_K | 6.60GB | ✅ Available | ⚪ Static | 📦 No |
 | [Llama-3-Instruct-8B-SimPO.Q5_K.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q5_K.gguf) | Q5_K | 5.73GB | ✅ Available | ⚪ Static | 📦 No |
 | [Llama-3-Instruct-8B-SimPO.Q5_K_S.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q5_K_S.gguf) | Q5_K_S | 5.60GB | ✅ Available | ⚪ Static | 📦 No |
+| [Llama-3-Instruct-8B-SimPO.Q4_K.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q4_K.gguf) | Q4_K | 4.92GB | ✅ Available | 🟢 IMatrix | 📦 No |
 | [Llama-3-Instruct-8B-SimPO.Q4_K_S.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q4_K_S.gguf) | Q4_K_S | 4.69GB | ✅ Available | 🟢 IMatrix | 📦 No |
-| [Llama-3-Instruct-8B-SimPO.Q3_K_L.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q3_K_L.gguf) | Q3_K_L | 4.32GB | ✅ Available | 🟢 IMatrix | 📦 No |
-| [Llama-3-Instruct-8B-SimPO.Q3_K_S.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q3_K_S.gguf) | Q3_K_S | 3.66GB | ✅ Available | 🟢 IMatrix | 📦 No |
-| [Llama-3-Instruct-8B-SimPO.Q2_K_S.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q2_K_S.gguf) | Q2_K_S | 2.99GB | ✅ Available | 🟢 IMatrix | 📦 No |
 | [Llama-3-Instruct-8B-SimPO.IQ4_NL.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.IQ4_NL.gguf) | IQ4_NL | 4.68GB | ✅ Available | 🟢 IMatrix | 📦 No |
 | [Llama-3-Instruct-8B-SimPO.IQ4_XS.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.IQ4_XS.gguf) | IQ4_XS | 4.45GB | ✅ Available | 🟢 IMatrix | 📦 No |
+| [Llama-3-Instruct-8B-SimPO.Q3_K.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q3_K.gguf) | Q3_K | 4.02GB | ✅ Available | 🟢 IMatrix | 📦 No |
+| [Llama-3-Instruct-8B-SimPO.Q3_K_L.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q3_K_L.gguf) | Q3_K_L | 4.32GB | ✅ Available | 🟢 IMatrix | 📦 No |
+| [Llama-3-Instruct-8B-SimPO.Q3_K_S.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q3_K_S.gguf) | Q3_K_S | 3.66GB | ✅ Available | 🟢 IMatrix | 📦 No |
 | [Llama-3-Instruct-8B-SimPO.IQ3_M.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.IQ3_M.gguf) | IQ3_M | 3.78GB | ✅ Available | 🟢 IMatrix | 📦 No |
 | [Llama-3-Instruct-8B-SimPO.IQ3_S.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.IQ3_S.gguf) | IQ3_S | 3.68GB | ✅ Available | 🟢 IMatrix | 📦 No |
 | [Llama-3-Instruct-8B-SimPO.IQ3_XS.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.IQ3_XS.gguf) | IQ3_XS | 3.52GB | ✅ Available | 🟢 IMatrix | 📦 No |
 | [Llama-3-Instruct-8B-SimPO.IQ3_XXS.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | ✅ Available | 🟢 IMatrix | 📦 No |
+| [Llama-3-Instruct-8B-SimPO.Q2_K.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q2_K.gguf) | Q2_K | 3.18GB | ✅ Available | 🟢 IMatrix | 📦 No |
+| [Llama-3-Instruct-8B-SimPO.Q2_K_S.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.Q2_K_S.gguf) | Q2_K_S | 2.99GB | ✅ Available | 🟢 IMatrix | 📦 No |
 | [Llama-3-Instruct-8B-SimPO.IQ2_M.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.IQ2_M.gguf) | IQ2_M | 2.95GB | ✅ Available | 🟢 IMatrix | 📦 No |
 | [Llama-3-Instruct-8B-SimPO.IQ2_S.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.IQ2_S.gguf) | IQ2_S | 2.76GB | ✅ Available | 🟢 IMatrix | 📦 No |
 | [Llama-3-Instruct-8B-SimPO.IQ2_XS.gguf](https://huggingface.co/legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF/blob/main/Llama-3-Instruct-8B-SimPO.IQ2_XS.gguf) | IQ2_XS | 2.61GB | ✅ Available | 🟢 IMatrix | 📦 No |
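
As an aside, the quant list above can also be enumerated programmatically instead of read off the table. A minimal Python sketch using the `huggingface_hub` client that the README installs further down; the `.gguf` filter is an illustrative assumption, not something the repo ships:

```python
from huggingface_hub import HfApi

# Enumerate every file in the quant repo, keeping only the GGUF quants.
api = HfApi()
files = api.list_repo_files("legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF")
for name in sorted(f for f in files if f.endswith(".gguf")):
    print(name)
```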
@@ -91,11 +91,11 @@ pip install -U "huggingface_hub[cli]"
 ```
 Download the specific file you want:
 ```
-huggingface-cli download legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF --include "Llama-3-Instruct-8B-SimPO.
+huggingface-cli download legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF --include "Llama-3-Instruct-8B-SimPO.BF16.gguf" --local-dir ./
 ```
 If the model file is big, it has been split into multiple files. In order to download them all to a local folder, run:
 ```
-huggingface-cli download legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF --include "Llama-3-Instruct-8B-SimPO.
+huggingface-cli download legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF --include "Llama-3-Instruct-8B-SimPO.BF16/*" --local-dir ./
 # see FAQ for merging GGUF's
 ```
 
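
The two CLI commands in this hunk can also be reproduced in Python with the same `huggingface_hub` package. A sketch under the commit's own choices (BF16 as the example quant; the `BF16/*` pattern only applies if that quant is actually split):

```python
from huggingface_hub import hf_hub_download, snapshot_download

REPO = "legraphista/Llama-3-Instruct-8B-SimPO-IMat-GGUF"

# Single-file quant: equivalent of --include "Llama-3-Instruct-8B-SimPO.BF16.gguf".
path = hf_hub_download(
    repo_id=REPO,
    filename="Llama-3-Instruct-8B-SimPO.BF16.gguf",
    local_dir="./",
)
print(path)

# Split quant: fetch every chunk in the folder, like --include "Llama-3-Instruct-8B-SimPO.BF16/*".
snapshot_download(
    repo_id=REPO,
    allow_patterns=["Llama-3-Instruct-8B-SimPO.BF16/*"],
    local_dir="./",
)
```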
@@ -133,7 +133,7 @@ What about solving an 2x + 3 = 7 equation?<|im_end|>
 
 ### Llama.cpp
 ```
-llama.cpp/main -m Llama-3-Instruct-8B-SimPO.
+llama.cpp/main -m Llama-3-Instruct-8B-SimPO.BF16.gguf --color -i -p "prompt here (according to the chat template)"
 ```
 
 ---
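
An alternative to the `llama.cpp/main` CLI shown here is loading the GGUF from Python via the `llama-cpp-python` bindings. This is an assumption on top of the README (the package is not mentioned there), so treat it as a sketch:

```python
from llama_cpp import Llama

# Load the downloaded GGUF; n_ctx is an illustrative context size.
llm = Llama(model_path="Llama-3-Instruct-8B-SimPO.BF16.gguf", n_ctx=8192)

# The chat template is picked up from the GGUF metadata where available.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What about solving a 2x + 3 = 7 equation?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```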
@@ -148,8 +148,8 @@ According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1
   - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
   - Download the appropriate zip for your system from the latest release
   - Unzip the archive and you should be able to find `gguf-split`
-2. Locate your GGUF chunks folder (ex: `Llama-3-Instruct-8B-SimPO.
-3. Run `gguf-split --merge Llama-3-Instruct-8B-SimPO.
+2. Locate your GGUF chunks folder (ex: `Llama-3-Instruct-8B-SimPO.BF16`)
+3. Run `gguf-split --merge Llama-3-Instruct-8B-SimPO.BF16/Llama-3-Instruct-8B-SimPO.BF16-00001-of-XXXXX.gguf Llama-3-Instruct-8B-SimPO.BF16.gguf`
   - Make sure to point `gguf-split` to the first chunk of the split.
 
 ---
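
The merge procedure in the last hunk also lends itself to a small script. A hedged sketch that wraps the same `gguf-split --merge` invocation; it assumes `gguf-split` is on `PATH` and that chunks follow the `-00001-of-XXXXX` naming from step 3 (`merge_gguf` is a hypothetical helper, not part of llama.cpp):

```python
import subprocess
from pathlib import Path

def merge_gguf(chunk_dir: str, output: str) -> None:
    """Merge split GGUF chunks by pointing gguf-split at the first chunk."""
    first = sorted(Path(chunk_dir).glob("*-00001-of-*.gguf"))
    if not first:
        raise FileNotFoundError(f"no first chunk found in {chunk_dir!r}")
    subprocess.run(["gguf-split", "--merge", str(first[0]), output], check=True)

merge_gguf("Llama-3-Instruct-8B-SimPO.BF16", "Llama-3-Instruct-8B-SimPO.BF16.gguf")
```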