Llama.cpp conversion fail, ffn_gate.weight not found
#4 opened by Sciumo
I pulled the main branch of llama.cpp today and ran convert.py to produce an fp16 model. Loading the result fails with:

llama_model_load: error loading model: create_tensor: tensor 'blk.0.ffn_gate.weight' not found
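For context, a minimal reproduction of the steps above might look like this (paths are placeholders; the `--outtype`/`--outfile` flags match the convert.py in the llama.cpp repo at the time, but your checkout may differ):

```shell
# Convert the original model to fp16 with llama.cpp's generic converter
# (model path is a placeholder)
python convert.py /path/to/model --outtype f16 --outfile model-f16.gguf

# Attempt to load the converted model; this is the step that fails with
# "create_tensor: tensor 'blk.0.ffn_gate.weight' not found"
./main -m model-f16.gguf -p "hello"
```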
Shrug, sounds like a bug in llama.cpp.
Use convert-hf-to-ggml.py instead.
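For anyone hitting the same error, the suggested workaround would look roughly like this. The script name is as given above; the invocation and flags are assumptions based on the other converters in the llama.cpp repo, so check `--help` in your checkout:

```shell
# Use the Hugging Face-specific converter instead of the generic convert.py
# (directory path is a placeholder; flags may differ in your llama.cpp version)
python convert-hf-to-ggml.py /path/to/hf-model-dir --outtype f16
```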