Missing pre-tokenizer type, model should be regenerated
I am seeing the following warning when starting the llama.cpp server:
"""
llama-cpp-server-1 | llm_load_vocab: missing pre-tokenizer type, using: 'default'
llama-cpp-server-1 | llm_load_vocab:
llama-cpp-server-1 | llm_load_vocab: ************************************
llama-cpp-server-1 | llm_load_vocab: GENERATION QUALITY WILL BE DEGRADED!
llama-cpp-server-1 | llm_load_vocab: CONSIDER REGENERATING THE MODEL
llama-cpp-server-1 | llm_load_vocab: ************************************
llama-cpp-server-1 | llm_load_vocab:
llama-cpp-server-1 | llm_load_vocab: special tokens definition check successful ( 1008/256000 ).
"""
@andrewcanis Could you regenerate the model, or am I doing something wrong?
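In case it helps: my understanding is that "regenerating" means re-converting the original Hugging Face checkpoint with an up-to-date llama.cpp checkout, whose converter writes the pre-tokenizer type into the GGUF. A rough sketch of that invocation (all paths are hypothetical):

```python
# Rough sketch: re-run llama.cpp's HF-to-GGUF converter from a recent
# checkout so it writes tokenizer.ggml.pre. All paths are hypothetical.
import subprocess

subprocess.run(
    [
        "python", "convert-hf-to-gguf.py",  # converter in the llama.cpp repo root
        "/path/to/original-hf-model",       # the source Hugging Face checkpoint
        "--outfile", "model-regenerated.gguf",
        "--outtype", "f16",
    ],
    cwd="/path/to/llama.cpp",               # recent checkout with pre-tokenizer support
    check=True,
)
```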
I was seeing this same output using Ollama (a llama.cpp wrapper). I'll see if I can find a solution and will report back if I do.