Upload folder using huggingface_hub
- README.md +16 -18
- SmolLM-135M-Instruct-Q2_K.gguf +2 -2
- SmolLM-135M-Instruct-Q3_K_L.gguf +2 -2
- SmolLM-135M-Instruct-Q3_K_M.gguf +2 -2
- SmolLM-135M-Instruct-Q3_K_S.gguf +2 -2
- SmolLM-135M-Instruct-Q4_0.gguf +2 -2
- SmolLM-135M-Instruct-Q4_K_M.gguf +2 -2
- SmolLM-135M-Instruct-Q4_K_S.gguf +2 -2
- SmolLM-135M-Instruct-Q5_0.gguf +2 -2
- SmolLM-135M-Instruct-Q5_K_M.gguf +2 -2
- SmolLM-135M-Instruct-Q5_K_S.gguf +2 -2
- SmolLM-135M-Instruct-Q6_K.gguf +2 -2
- SmolLM-135M-Instruct-Q8_0.gguf +2 -2
README.md
CHANGED
@@ -1,10 +1,10 @@
 ---
 license: apache-2.0
-base_model:
+base_model: unsloth/SmolLM-135M-Instruct
 tags:
 - alignment-handbook
 - trl
--
+- unsloth
 - TensorBlock
 - GGUF
 datasets:
@@ -28,13 +28,12 @@ language:
 </div>
 </div>
 
-##
+## unsloth/SmolLM-135M-Instruct - GGUF
 
-This repo contains GGUF format model files for [
+This repo contains GGUF format model files for [unsloth/SmolLM-135M-Instruct](https://huggingface.co/unsloth/SmolLM-135M-Instruct).
 
 The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
 
-
 <div style="text-align: left; margin: 20px 0;">
 <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
 Run them on the TensorBlock client using your local machine ↗
@@ -43,7 +42,6 @@ The files were quantized using machines provided by [TensorBlock](https://tensor
 
 ## Prompt template
 
-
 ```
 <|im_start|>system
 {system_prompt}<|im_end|>
@@ -56,18 +54,18 @@ The files were quantized using machines provided by [TensorBlock](https://tensor
 
 | Filename | Quant type | File Size | Description |
 | -------- | ---------- | --------- | ----------- |
-| [SmolLM-135M-Instruct-Q2_K.gguf](https://huggingface.co/tensorblock/SmolLM-135M-Instruct-GGUF/blob/main/SmolLM-135M-Instruct-Q2_K.gguf) | Q2_K | 0.
-| [SmolLM-135M-Instruct-Q3_K_S.gguf](https://huggingface.co/tensorblock/SmolLM-135M-Instruct-GGUF/blob/main/SmolLM-135M-Instruct-Q3_K_S.gguf) | Q3_K_S | 0.
-| [SmolLM-135M-Instruct-Q3_K_M.gguf](https://huggingface.co/tensorblock/SmolLM-135M-Instruct-GGUF/blob/main/SmolLM-135M-Instruct-Q3_K_M.gguf) | Q3_K_M | 0.
-| [SmolLM-135M-Instruct-Q3_K_L.gguf](https://huggingface.co/tensorblock/SmolLM-135M-Instruct-GGUF/blob/main/SmolLM-135M-Instruct-Q3_K_L.gguf) | Q3_K_L | 0.
-| [SmolLM-135M-Instruct-Q4_0.gguf](https://huggingface.co/tensorblock/SmolLM-135M-Instruct-GGUF/blob/main/SmolLM-135M-Instruct-Q4_0.gguf) | Q4_0 | 0.
-| [SmolLM-135M-Instruct-Q4_K_S.gguf](https://huggingface.co/tensorblock/SmolLM-135M-Instruct-GGUF/blob/main/SmolLM-135M-Instruct-Q4_K_S.gguf) | Q4_K_S | 0.
-| [SmolLM-135M-Instruct-Q4_K_M.gguf](https://huggingface.co/tensorblock/SmolLM-135M-Instruct-GGUF/blob/main/SmolLM-135M-Instruct-Q4_K_M.gguf) | Q4_K_M | 0.
-| [SmolLM-135M-Instruct-Q5_0.gguf](https://huggingface.co/tensorblock/SmolLM-135M-Instruct-GGUF/blob/main/SmolLM-135M-Instruct-Q5_0.gguf) | Q5_0 | 0.
-| [SmolLM-135M-Instruct-Q5_K_S.gguf](https://huggingface.co/tensorblock/SmolLM-135M-Instruct-GGUF/blob/main/SmolLM-135M-Instruct-Q5_K_S.gguf) | Q5_K_S | 0.
-| [SmolLM-135M-Instruct-Q5_K_M.gguf](https://huggingface.co/tensorblock/SmolLM-135M-Instruct-GGUF/blob/main/SmolLM-135M-Instruct-Q5_K_M.gguf) | Q5_K_M | 0.
-| [SmolLM-135M-Instruct-Q6_K.gguf](https://huggingface.co/tensorblock/SmolLM-135M-Instruct-GGUF/blob/main/SmolLM-135M-Instruct-Q6_K.gguf) | Q6_K | 0.
-| [SmolLM-135M-Instruct-Q8_0.gguf](https://huggingface.co/tensorblock/SmolLM-135M-Instruct-GGUF/blob/main/SmolLM-135M-Instruct-Q8_0.gguf) | Q8_0 | 0.
+| [SmolLM-135M-Instruct-Q2_K.gguf](https://huggingface.co/tensorblock/SmolLM-135M-Instruct-GGUF/blob/main/SmolLM-135M-Instruct-Q2_K.gguf) | Q2_K | 0.088 GB | smallest, significant quality loss - not recommended for most purposes |
+| [SmolLM-135M-Instruct-Q3_K_S.gguf](https://huggingface.co/tensorblock/SmolLM-135M-Instruct-GGUF/blob/main/SmolLM-135M-Instruct-Q3_K_S.gguf) | Q3_K_S | 0.088 GB | very small, high quality loss |
+| [SmolLM-135M-Instruct-Q3_K_M.gguf](https://huggingface.co/tensorblock/SmolLM-135M-Instruct-GGUF/blob/main/SmolLM-135M-Instruct-Q3_K_M.gguf) | Q3_K_M | 0.094 GB | very small, high quality loss |
+| [SmolLM-135M-Instruct-Q3_K_L.gguf](https://huggingface.co/tensorblock/SmolLM-135M-Instruct-GGUF/blob/main/SmolLM-135M-Instruct-Q3_K_L.gguf) | Q3_K_L | 0.098 GB | small, substantial quality loss |
+| [SmolLM-135M-Instruct-Q4_0.gguf](https://huggingface.co/tensorblock/SmolLM-135M-Instruct-GGUF/blob/main/SmolLM-135M-Instruct-Q4_0.gguf) | Q4_0 | 0.092 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+| [SmolLM-135M-Instruct-Q4_K_S.gguf](https://huggingface.co/tensorblock/SmolLM-135M-Instruct-GGUF/blob/main/SmolLM-135M-Instruct-Q4_K_S.gguf) | Q4_K_S | 0.102 GB | small, greater quality loss |
+| [SmolLM-135M-Instruct-Q4_K_M.gguf](https://huggingface.co/tensorblock/SmolLM-135M-Instruct-GGUF/blob/main/SmolLM-135M-Instruct-Q4_K_M.gguf) | Q4_K_M | 0.105 GB | medium, balanced quality - recommended |
+| [SmolLM-135M-Instruct-Q5_0.gguf](https://huggingface.co/tensorblock/SmolLM-135M-Instruct-GGUF/blob/main/SmolLM-135M-Instruct-Q5_0.gguf) | Q5_0 | 0.105 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+| [SmolLM-135M-Instruct-Q5_K_S.gguf](https://huggingface.co/tensorblock/SmolLM-135M-Instruct-GGUF/blob/main/SmolLM-135M-Instruct-Q5_K_S.gguf) | Q5_K_S | 0.110 GB | large, low quality loss - recommended |
+| [SmolLM-135M-Instruct-Q5_K_M.gguf](https://huggingface.co/tensorblock/SmolLM-135M-Instruct-GGUF/blob/main/SmolLM-135M-Instruct-Q5_K_M.gguf) | Q5_K_M | 0.112 GB | large, very low quality loss - recommended |
+| [SmolLM-135M-Instruct-Q6_K.gguf](https://huggingface.co/tensorblock/SmolLM-135M-Instruct-GGUF/blob/main/SmolLM-135M-Instruct-Q6_K.gguf) | Q6_K | 0.138 GB | very large, extremely low quality loss |
+| [SmolLM-135M-Instruct-Q8_0.gguf](https://huggingface.co/tensorblock/SmolLM-135M-Instruct-GGUF/blob/main/SmolLM-135M-Instruct-Q8_0.gguf) | Q8_0 | 0.145 GB | very large, extremely low quality loss - not recommended |
 
 
 ## Downloading instruction
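The ChatML-style prompt template in the README diff above can be assembled programmatically. A minimal sketch: the diff hunk only shows the system turn, so the user/assistant turns below are an assumption based on standard ChatML, and `build_prompt` is a hypothetical helper, not part of this repo:

```python
# Sketch of filling in the ChatML-style prompt template from the README.
# The diff hunk only shows the system turn; the user/assistant turns here
# follow standard ChatML and are an assumption, not taken from this repo.

def build_prompt(system_prompt: str, user_prompt: str) -> str:
    """Assemble a single prompt string in ChatML form."""
    return (
        f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
        f"<|im_start|>user\n{user_prompt}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(build_prompt("You are a helpful assistant.", "Summarize GGUF in one line."))
```

The resulting string can then be passed as the raw prompt when running one of the GGUF files with a llama.cpp build at or after commit b4011.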
SmolLM-135M-Instruct-Q2_K.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:5d067ec135bdc4614f30e48ba08dda62ce102d778853b96351e0f24f013888e0
+size 88202208
SmolLM-135M-Instruct-Q3_K_L.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:1fbc29bd825b73b3a8ed69d342a34103baf298c0072da312e32f390d9207dbf8
+size 97533408
SmolLM-135M-Instruct-Q3_K_M.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:e6af58e98ad7a595a26cdae94aa0ed39ce4c6094339adf9b5e20f53a2ac14403
+size 93510624
SmolLM-135M-Instruct-Q3_K_S.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:2b1ed8531636f03d869832c2401b0085f9ed6c9ee064d8fa80f2a2d14cf946fd
+size 88202208
SmolLM-135M-Instruct-Q4_0.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:1fb3eae4cb87ef3da7115af435b9f0b51c6b20ce1268d1587000dd12b05993e3
+size 91727328
SmolLM-135M-Instruct-Q4_K_M.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:1c6d7a00aa4c59a2d9124b87789689785dc0f5f7bffd4fdba2f3b61ae6b521ff
+size 105454560
SmolLM-135M-Instruct-Q4_K_S.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:52bf33eda0d83226b86329364934e561cfb657259d495cdd1dc8881d8f9429eb
+size 102040032
SmolLM-135M-Instruct-Q5_0.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:c19f93904cb68c38846543f16eb1d0211888b9654c769c38e301c6a3fc40324e
+size 104998368
SmolLM-135M-Instruct-Q5_K_M.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:afd9d919b7e9d905e5f301f4c44b75aa901d6f715b13fea20c49635a2c33b304
+size 112103904
SmolLM-135M-Instruct-Q5_K_S.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:f6ff0a815ff4f7ee94d034dddb3c7f44e8717ebc93df459812ccd083d08c305c
+size 109975008
SmolLM-135M-Instruct-Q6_K.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:5aa320992275d4aedb82269ffe7b5ebe619365e9ff89bcd2d1b6a7b52e49a4bb
+size 138383328
SmolLM-135M-Instruct-Q8_0.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:cf240fa487339e716ebac149716cd2ed24854b1a3cd0251d7cb2431f0499b016
+size 144811488
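Each .gguf entry in this commit is a Git LFS pointer recording a sha256 oid and a byte size for the actual file. A downloaded file can be checked against its pointer with a small sketch (the helper names here are hypothetical, not part of the repo or of any library):

```python
import hashlib
from pathlib import Path

def parse_lfs_pointer(pointer_text: str) -> tuple[str, int]:
    """Extract the sha256 oid and byte size from a Git LFS pointer file."""
    oid, size = "", -1
    for line in pointer_text.splitlines():
        if line.startswith("oid sha256:"):
            oid = line.split(":", 1)[1].strip()
        elif line.startswith("size "):
            size = int(line.split(" ", 1)[1])
    return oid, size

def matches_pointer(path: str, pointer_text: str) -> bool:
    """True if the file at `path` has the size and sha256 the pointer records."""
    oid, size = parse_lfs_pointer(pointer_text)
    data = Path(path).read_bytes()
    return len(data) == size and hashlib.sha256(data).hexdigest() == oid
```

For example, SmolLM-135M-Instruct-Q2_K.gguf should come out to 88202208 bytes with the oid listed above; a mismatch usually indicates a truncated or corrupted download.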