#2 opened by NikolayKozloff

Thanks for the great model! I tried to create a Q8 GGUF using the HF gguf-my-repo Space, but got this error:

Error converting to fp16:

INFO:hf-to-gguf:Loading model: EuroLLM-1.7B-Instruct
INFO:gguf.gguf_writer:gguf: This GGUF file is for Little Endian only
INFO:hf-to-gguf:Exporting model...
INFO:hf-to-gguf:gguf: loading model part 'model.safetensors'
INFO:hf-to-gguf:output.weight, torch.bfloat16 --> F16, shape = {2048, 128000}
INFO:hf-to-gguf:token_embd.weight, torch.bfloat16 --> F16, shape = {2048, 128000}
INFO:hf-to-gguf:blk.0.attn_norm.weight, torch.bfloat16 --> F32, shape = {2048}
INFO:hf-to-gguf:blk.0.ffn_down.weight, torch.bfloat16 --> F16, shape = {5632, 2048}
INFO:hf-to-gguf:blk.0.ffn_gate.weight, torch.bfloat16 --> F16, shape = {2048, 5632}
INFO:hf-to-gguf:blk.0.ffn_up.weight, torch.bfloat16 --> F16, shape = {2048, 5632}
INFO:hf-to-gguf:blk.0.ffn_norm.weight, torch.bfloat16 --> F32, shape = {2048}
INFO:hf-to-gguf:blk.0.attn_k.weight, torch.bfloat16 --> F16, shape = {2048, 1024}
INFO:hf-to-gguf:blk.0.attn_output.weight, torch.bfloat16 --> F16, shape = {2048, 2048}
INFO:hf-to-gguf:blk.0.attn_q.weight, torch.bfloat16 --> F16, shape = {2048, 2048}
INFO:hf-to-gguf:blk.0.attn_v.weight, torch.bfloat16 --> F16, shape = {2048, 1024}
[... the same nine per-tensor conversion lines repeat for blk.1 through blk.23 ...]
INFO:hf-to-gguf:output_norm.weight, torch.bfloat16 --> F32, shape = {2048}
INFO:hf-to-gguf:Set meta model
INFO:hf-to-gguf:Set model parameters
INFO:hf-to-gguf:gguf: context length = 8192
INFO:hf-to-gguf:gguf: embedding length = 2048
INFO:hf-to-gguf:gguf: feed forward length = 5632
INFO:hf-to-gguf:gguf: head count = 16
INFO:hf-to-gguf:gguf: key-value head count = 8
INFO:hf-to-gguf:gguf: rope theta = 10000.0
INFO:hf-to-gguf:gguf: rms norm epsilon = 1e-05
INFO:hf-to-gguf:gguf: file type = 1
INFO:hf-to-gguf:Set model tokenizer
INFO:gguf.vocab:Setting special token type bos to 1
INFO:gguf.vocab:Setting special token type eos to 4
INFO:gguf.vocab:Setting special token type unk to 0
INFO:gguf.vocab:Setting special token type pad to 2
INFO:gguf.vocab:Setting add_bos_token to True
INFO:gguf.vocab:Setting add_eos_token to False
INFO:gguf.vocab:Setting chat_template to {%- set system_message = namespace(content='') -%}{%- if messages[0]['role'] == 'system' -%}{%- set system_message.content = messages[0]['content'] -%}{%- endif -%}<|im_start|>system
{{ system_message.content }}<|im_end|>
{% for message in messages -%}{%- if message['role'] != 'system' %}<|im_start|>{{ message['role'] }}
{{ message['content'] }}<|im_end|>
{%- endif -%}{%- endfor -%}
INFO:hf-to-gguf:Set model quantization version
INFO:gguf.gguf_writer:Writing the following files:
INFO:gguf.gguf_writer:EuroLLM-1.7B-Instruct.fp16.gguf: n_tensors = 219, total_size = 3.3G
Traceback (most recent call last):
  File "/home/user/app/llama.cpp/convert_hf_to_gguf.py", line 4330, in <module>
    main()
  File "/home/user/app/llama.cpp/convert_hf_to_gguf.py", line 4324, in main
    model_instance.write()
  File "/home/user/app/llama.cpp/convert_hf_to_gguf.py", line 428, in write
    self.gguf_writer.write_kv_data_to_file()
  File "/home/user/app/llama.cpp/gguf-py/gguf/gguf_writer.py", line 240, in write_kv_data_to_file
    kv_bytes += self._pack_val(val.value, val.type, add_vtype=True)
  File "/home/user/app/llama.cpp/gguf-py/gguf/gguf_writer.py", line 890, in _pack_val
    raise ValueError("All items in a GGUF array should be of the same type")
ValueError: All items in a GGUF array should be of the same type
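For what it's worth, the traceback ends in gguf-py's GGUFWriter._pack_val, which refuses to serialize a GGUF metadata array whose elements are not all of one type. One way to hunt for the offending field is to scan the repo's JSON config files for lists with mixed element types. This is only a diagnostic sketch, assuming a local copy of the repo and the standard HF file names; the real culprit may also sit inside the tokenizer files:

```python
import json
from pathlib import Path

# Hypothetical local clone of the model repo; adjust the path as needed.
MODEL_DIR = Path("EuroLLM-1.7B-Instruct")

def find_mixed_type_lists(obj, path="$"):
    """Recursively yield (path, element types) for lists whose items differ in type."""
    if isinstance(obj, dict):
        for key, value in obj.items():
            yield from find_mixed_type_lists(value, f"{path}.{key}")
    elif isinstance(obj, list):
        types = {type(item).__name__ for item in obj}
        if len(types) > 1:
            yield path, types
        for i, value in enumerate(obj):
            yield from find_mixed_type_lists(value, f"{path}[{i}]")

for name in ("config.json", "generation_config.json",
             "tokenizer_config.json", "special_tokens_map.json"):
    file = MODEL_DIR / name
    if file.exists():
        for where, types in find_mixed_type_lists(json.loads(file.read_text())):
            print(f"{name}: {where} mixes {sorted(types)}")
```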

How can it be fixed?
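If it helps with debugging, the gguf-my-repo Space roughly corresponds to running llama.cpp's converter and quantizer locally, where the full traceback is easier to iterate on. A sketch of that local flow (the llama.cpp checkout location, binary path, and output file names are assumptions):

```python
import subprocess

# Assumed local paths; adjust to where llama.cpp and the model checkout live.
LLAMA_CPP = "llama.cpp"
MODEL_DIR = "EuroLLM-1.7B-Instruct"

# 1) Convert the HF checkpoint to an fp16 GGUF (the step that fails in the log above).
subprocess.run(
    ["python", f"{LLAMA_CPP}/convert_hf_to_gguf.py", MODEL_DIR,
     "--outtype", "f16", "--outfile", "EuroLLM-1.7B-Instruct.fp16.gguf"],
    check=True,
)

# 2) Quantize the fp16 GGUF to Q8_0 (binary location depends on how llama.cpp was built).
subprocess.run(
    [f"{LLAMA_CPP}/build/bin/llama-quantize",
     "EuroLLM-1.7B-Instruct.fp16.gguf", "EuroLLM-1.7B-Instruct.Q8_0.gguf", "Q8_0"],
    check=True,
)
```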

phmartins changed discussion status to closed
