FileNotFoundError: No such file or directory: "Llama3_pubmedqa_finetune/model-00001-of-00003.safetensors"
#1 by Milor123 - opened
I can't convert this model to GGUF:
python convert.py clonadosmios/Llama3_pubmedqa_finetune/ --outfile Llama3_pubmedqa_finetune-q8_0.gguf --outtype q8_0 --vocab-type bpe
INFO:convert:Loading model file clonadosmios/Llama3_pubmedqa_finetune/model-00001-of-00002.safetensors
Traceback (most recent call last):
  File "/home/noe/clonados/llama.cpp/convert.py", line 1567, in <module>
    main()
  File "/home/noe/clonados/llama.cpp/convert.py", line 1499, in main
    model_plus = load_some_model(args.model)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/noe/clonados/llama.cpp/convert.py", line 1378, in load_some_model
    models_plus.append(lazy_load_file(path))
                       ^^^^^^^^^^^^^^^^^^^^
  File "/home/noe/clonados/llama.cpp/convert.py", line 982, in lazy_load_file
    return lazy_load_safetensors_file(fp, path)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/noe/clonados/llama.cpp/convert.py", line 961, in lazy_load_safetensors_file
    model = {name: convert(info) for (name, info) in header.items() if name != '__metadata__'}
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/noe/clonados/llama.cpp/convert.py", line 961, in <dictcomp>
    model = {name: convert(info) for (name, info) in header.items() if name != '__metadata__'}
                   ^^^^^^^^^^^^^
  File "/home/noe/clonados/llama.cpp/convert.py", line 949, in convert
    data_type = SAFETENSORS_DATA_TYPES[info['dtype']]
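The exception line itself is cut off above, but the last frame is a dictionary lookup in `SAFETENSORS_DATA_TYPES`, so presumably the shard declares a dtype this `convert.py` doesn't know about (this is an assumption, since the actual message is missing). A small stdlib-only sketch to see which dtypes a shard's header actually declares:

```python
import json
import struct

def safetensors_dtypes(path):
    """Collect the tensor dtypes declared in a .safetensors header.

    A safetensors file starts with an unsigned 64-bit little-endian
    integer giving the byte length of a JSON header; the header maps
    tensor names to {"dtype", "shape", "data_offsets"} entries.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    return {info["dtype"] for name, info in header.items()
            if name != "__metadata__"}
```

Running this on `model-00001-of-00002.safetensors` shows what `info['dtype']` was when the lookup failed; you can then compare that against the `SAFETENSORS_DATA_TYPES` dict in your copy of `convert.py`.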
And using:
python convert-hf-to-gguf.py clonadosmios/Llama3_pubmedqa_finetune
INFO:hf-to-gguf:Loading model: Llama3_pubmedqa_finetune
INFO:gguf.gguf_writer:gguf: This GGUF file is for Little Endian only
INFO:hf-to-gguf:Set model parameters
INFO:hf-to-gguf:gguf: context length = 8192
INFO:hf-to-gguf:gguf: embedding length = 4096
INFO:hf-to-gguf:gguf: feed forward length = 14336
INFO:hf-to-gguf:gguf: head count = 32
INFO:hf-to-gguf:gguf: key-value head count = 8
INFO:hf-to-gguf:gguf: rope theta = 500000.0
INFO:hf-to-gguf:gguf: rms norm epsilon = 1e-05
INFO:hf-to-gguf:gguf: file type = 1
INFO:hf-to-gguf:Set model tokenizer
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
INFO:gguf.vocab:Adding 280147 merge(s).
INFO:gguf.vocab:Setting special token type bos to 128000
INFO:gguf.vocab:Setting special token type eos to 128001
INFO:gguf.vocab:Setting special token type pad to 128000
INFO:gguf.vocab:Setting chat_template to {% set loop_messages = messages %}{% for message in loop_messages %}{% set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>
'+ message['content'] | trim + '<|eot_id|>' %}{% if loop.index0 == 0 %}{% set content = bos_token + content %}{% endif %}{{ content }}{% endfor %}{% if add_generation_prompt %}{{ '<|start_header_id|>assistant<|end_header_id|>
' }}{% endif %}
INFO:hf-to-gguf:Exporting model to 'clonadosmios/Llama3_pubmedqa_finetune/ggml-model-f16.gguf'
INFO:hf-to-gguf:gguf: loading model part 'model-00001-of-00003.safetensors'
Traceback (most recent call last):
  File "/home/noe/clonados/llama.cpp/convert-hf-to-gguf.py", line 2978, in <module>
    main()
  File "/home/noe/clonados/llama.cpp/convert-hf-to-gguf.py", line 2972, in main
    model_instance.write()
  File "/home/noe/clonados/llama.cpp/convert-hf-to-gguf.py", line 179, in write
    self.write_tensors()
  File "/home/noe/clonados/llama.cpp/convert-hf-to-gguf.py", line 1421, in write_tensors
    for name, data_torch in self.get_tensors():
  File "/home/noe/clonados/llama.cpp/convert-hf-to-gguf.py", line 86, in get_tensors
    ctx = cast(ContextManager[Any], safe_open(self.dir_model / part_name, framework="pt", device="cpu"))
                                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: No such file or directory: "clonadosmios/Llama3_pubmedqa_finetune/model-00001-of-00003.safetensors"
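Note the mismatch between the two runs: the first loaded `model-00001-of-00002.safetensors`, while `convert-hf-to-gguf.py` asked for `model-00001-of-00003.safetensors`. That suggests the index file and the shards on disk disagree, or that some shards were never actually downloaded (e.g. `git lfs pull` was skipped, leaving tiny pointer stubs). A sketch that compares the two, assuming the standard Hugging Face index name `model.safetensors.index.json`:

```python
import json
from pathlib import Path

def check_shards(model_dir):
    """Compare shards named in model.safetensors.index.json with the
    .safetensors files actually present in model_dir.

    Returns (missing, extra, suspicious), where `suspicious` lists
    files too small to be real shards (likely git-lfs pointer stubs).
    """
    model_dir = Path(model_dir)
    index = json.loads(
        (model_dir / "model.safetensors.index.json").read_text())
    expected = set(index["weight_map"].values())
    on_disk = {p.name for p in model_dir.glob("*.safetensors")}
    missing = sorted(expected - on_disk)
    extra = sorted(on_disk - expected)
    # A real shard is gigabytes; a git-lfs pointer file is ~130 bytes.
    suspicious = sorted(n for n in expected & on_disk
                        if (model_dir / n).stat().st_size < 1024)
    return missing, extra, suspicious
```

If `missing` shows `...-of-00003` names while `extra` shows `...-of-00002` names, the index belongs to a different save of the model than the shards, and re-downloading the repo (or regenerating the index) should fix both conversions; if `suspicious` is non-empty, running `git lfs pull` in the clone should.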