kaiokendev committed
Commit ca2e92e • Parent(s): 27a8de1

Update README.md

README.md CHANGED
@@ -10,8 +10,8 @@ Tests have shown that the model does indeed leverage the extended context at 8K.
 You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**
 
 #### Looking for Merged & Quantized Models?
-30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors)
-30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors)
+- 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors)
+- 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors)
 
 
 #### Training Details
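
For context on the "scaling factor 0.25 and maximum sequence length 8192" note in the hunk above: this corresponds to linear interpolation of rotary position indices, compressing 8192 positions into the 2048-position range the base model was trained on. Below is a minimal PyTorch sketch of that idea; the class name and integration point are assumptions for illustration, not the actual monkeypatch code.

```python
# Minimal sketch of linear RoPE scaling (assumed from "scale 0.25, max 8192"):
# position indices are multiplied by 0.25 so 8192 positions span the original
# 2048-position rotary range. Illustrative only, not the repository's patch.
import torch


class ScaledRotaryEmbedding(torch.nn.Module):
    def __init__(self, dim, max_position_embeddings=8192, scale=0.25, base=10000):
        super().__init__()
        # Standard rotary inverse frequencies.
        inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
        # Compress positions by the scale factor (the "0.25" in the README).
        positions = torch.arange(max_position_embeddings, dtype=torch.float32) * scale
        freqs = torch.outer(positions, inv_freq)
        emb = torch.cat((freqs, freqs), dim=-1)
        self.register_buffer("cos_cached", emb.cos(), persistent=False)
        self.register_buffer("sin_cached", emb.sin(), persistent=False)

    def forward(self, seq_len: int):
        # Cos/sin tables used to rotate query/key vectors up to seq_len.
        return self.cos_cached[:seq_len], self.sin_cached[:seq_len]
```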