mradermacher committed
Commit • 2350f2b
Parent(s): 210c8c1
auto-patch README.md
README.md CHANGED
```diff
@@ -1,5 +1,5 @@
 ---
-base_model:
+base_model: Dampfinchen/Llama-3-8B-Ultra-Instruct
 language:
 - en
 library_name: transformers
@@ -15,7 +15,7 @@ tags:
 <!-- ### output_tensor_quantised: 1 -->
 <!-- ### convert_type: -->
 <!-- ### vocab_type: -->
-static quants of https://huggingface.co/
+static quants of https://huggingface.co/Dampfinchen/Llama-3-8B-Ultra-Instruct
 
 You should use `--override-kv tokenizer.ggml.pre=str:llama3` and a current llama.cpp version to work around a bug in llama.cpp that made these quants. (see https://old.reddit.com/r/LocalLLaMA/comments/1cg0z1i/bpe_pretokenization_support_is_now_merged_llamacpp/?share_id=5dBFB9x0cOJi8vbr-Murh)
 
```
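For reference, a minimal sketch of the workaround the README describes: passing the pre-tokenizer override on the llama.cpp command line. The `llama-cli` binary name and the GGUF filename below are assumptions for illustration, not taken from this commit.

```sh
# Hypothetical invocation: load one of these static quants and override the
# pre-tokenizer metadata so a current llama.cpp tokenizes with the llama3 BPE
# rules despite the bug present when the quants were made.
./llama-cli -m Llama-3-8B-Ultra-Instruct.Q4_K_M.gguf \
  --override-kv tokenizer.ggml.pre=str:llama3 \
  -p "Hello"
```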