tokenizer.model ?
#1 by pipilok - opened
I’m trying to convert the model weights on my own, but I keep getting this error. =( Could it be that the tokenizer.model file is missing, or am I doing something wrong?
INFO:hf-to-gguf:Set model tokenizer
Traceback (most recent call last):
  File "C:\Users\User\Documents\GitHub\llama.cpp\convert_hf_to_gguf.py", line 4462, in <module>
    main()
  File "C:\Users\User\Documents\GitHub\llama.cpp\convert_hf_to_gguf.py", line 4456, in main
    model_instance.write()
  File "C:\Users\User\Documents\GitHub\llama.cpp\convert_hf_to_gguf.py", line 435, in write
    self.prepare_metadata(vocab_only=False)
  File "C:\Users\User\Documents\GitHub\llama.cpp\convert_hf_to_gguf.py", line 428, in prepare_metadata
    self.set_vocab()
  File "C:\Users\User\Documents\GitHub\llama.cpp\convert_hf_to_gguf.py", line 2137, in set_vocab
    raise ValueError(f'Error: Missing {tokenizer_path}')
ValueError: Error: Missing F:\Models\phi-4\tokenizer.model
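For what it's worth, a quick stdlib check (path taken from the traceback above) shows which tokenizer files are actually present in the download. The note that Phi-4 ships a BPE tokenizer.json rather than a SentencePiece tokenizer.model is my assumption, not something the traceback states:

```python
from pathlib import Path

# Path taken from the traceback above.
model_dir = Path(r"F:\Models\phi-4")

# convert_hf_to_gguf.py is looking for a SentencePiece tokenizer.model;
# as far as I can tell, Phi-4 ships a BPE tokenizer.json instead (assumption).
for name in ("tokenizer.model", "tokenizer.json", "tokenizer_config.json"):
    status = "found" if (model_dir / name).is_file() else "missing"
    print(f"{name}: {status}")
```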
You need a patch to llama.cpp to allow converting the Phi-4 model.
This PR should fix it: https://github.com/ggerganov/llama.cpp/pull/10817
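For context, here is a rough sketch of the kind of fallback such a patch would add: prefer tokenizer.model when it exists, otherwise fall back to the BPE tokenizer.json. The function name and dispatch below are my own illustration, not the actual code from the PR:

```python
from pathlib import Path

def pick_vocab_loader(model_dir: Path) -> str:
    """Illustrative dispatch only, not the code from the PR: prefer a
    SentencePiece tokenizer.model when it exists, otherwise fall back
    to the BPE tokenizer.json that Phi-4 ships (assumption)."""
    if (model_dir / "tokenizer.model").is_file():
        return "sentencepiece"  # Phi-3 style vocab
    if (model_dir / "tokenizer.json").is_file():
        return "gpt2-bpe"       # Phi-4 style vocab
    # Same error the unpatched converter raises in set_vocab()
    raise ValueError(f"Error: Missing {model_dir / 'tokenizer.model'}")

print(pick_vocab_loader(Path(r"F:\Models\phi-4")))
```

Until the PR is merged, you should be able to try it by fetching the PR branch into your local clone (git fetch origin pull/10817/head:phi4-fix, then git checkout phi4-fix) and rerunning convert_hf_to_gguf.py from that checkout.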