Have these quants had their pre-tokenizer fixed?
#8
by smcleod - opened
Many Llama 3 quantizations were created with a missing pre-tokenizer type. Has this been fixed in these quants?
llm_load_vocab: missing pre-tokenizer type, using: 'default'
llm_load_vocab: ************************************
llm_load_vocab: GENERATION QUALITY WILL BE DEGRADED!
llm_load_vocab: CONSIDER REGENERATING THE MODEL
llm_load_vocab: ************************************
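For reference, llama.cpp stores the pre-tokenizer type under the `tokenizer.ggml.pre` metadata key, so a file can be checked without loading the whole model. Below is a minimal sketch using the gguf-py package from the llama.cpp repo (`pip install gguf`); the file name is a placeholder, and the parts/data decoding follows my reading of gguf-py's reader API. Regenerated Llama 3 quants should report `llama-bpe` here, if I understand the convert script correctly.

```python
# Minimal sketch: inspect a GGUF file's pre-tokenizer metadata with the
# gguf-py package that ships with llama.cpp (pip install gguf).
from gguf import GGUFReader

reader = GGUFReader("Meta-Llama-3-8B-Instruct.Q4_K_M.gguf")  # placeholder path

field = reader.get_field("tokenizer.ggml.pre")
if field is None:
    # A missing key is what triggers the "using: 'default'" warning above.
    print("missing pre-tokenizer type -> llama.cpp falls back to 'default'")
else:
    # String values live in field.parts; field.data indexes the payload bytes
    # (assumption: based on gguf-py's ReaderField layout).
    value = bytes(field.parts[field.data[0]]).decode("utf-8")
    print(f"pre-tokenizer type: {value}")
```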
@smcleod Please also see this post if you use llama.cpp: https://www.reddit.com/r/LocalLLaMA/comments/1cg0z1i/bpe_pretokenization_support_is_now_merged_llamacpp/
smcleod changed discussion status to closed