Stop token fix?
#1 by americancookie - opened
Hi Michael,
Thanks for your quants, many thanks!
Which stop token do you use for the Llama 3 quants during conversion?
Please see the discussion at: https://www.reddit.com/r/LocalLLaMA/comments/1c7dkxh/tutorial_how_to_make_llama3instruct_ggufs_less/
Cheers :)
americancookie
changed discussion title from "End of token fix?" to "Stop token fix?"
I don't set any stop tokens; I merely quantize the model as-is. Also, the "fix" you link to breaks the model and should not be applied. The problem is in the software, not the GGUF. The only correct way to fix this is to update your software for Llama 3.
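To illustrate what "fix it in the software" means here: Llama 3 Instruct publishes two special end tokens, `<|end_of_text|>` and the per-turn terminator `<|eot_id|>`, and updated inference software stops on either rather than patching the GGUF metadata. A minimal sketch of that idea, with illustrative function names (not any particular runtime's API):

```python
# The published Llama 3 special tokens; updated software treats the
# instruct turn terminator <|eot_id|> as a stop token in addition to
# the regular <|end_of_text|> EOS, instead of editing the GGUF.
LLAMA3_STOP_TOKENS = {"<|end_of_text|>", "<|eot_id|>"}


def should_stop(token_text: str, stop_tokens=LLAMA3_STOP_TOKENS) -> bool:
    """Return True if the decoded token should end generation."""
    return token_text in stop_tokens


def generate(token_stream) -> str:
    """Collect decoded tokens until a stop token appears (hypothetical loop)."""
    out = []
    for tok in token_stream:
        if should_stop(tok):
            break
        out.append(tok)
    return "".join(out)
```

With a loop like this in the runtime, the quantized model file can stay untouched; the same GGUF works once the software knows both stop tokens.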
mradermacher
changed discussion status to closed
Thank you :)!