[bug] asymmetric t5 models fail to quantize #126
by pszemraj - opened
Hi! I'm finding that T5 models with a different number of layers in the encoder and decoder fail to convert, regardless of size, while my custom-pretrained symmetric T5 model works fine. For example, both of these fail (quick config check below):
- https://hf.co/BEE-spoke-data/tFINE-900m-e16-d32-instruct
- https://hf.co/google/t5-efficient-large-dl12
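For reference, the asymmetry is visible directly in the model configs. A quick check (just a sketch, relying on the standard Hugging Face T5Config fields num_layers / num_decoder_layers):

```python
# Compare encoder vs. decoder depth for the failing repos (sketch).
from transformers import AutoConfig

for repo in (
    "BEE-spoke-data/tFINE-900m-e16-d32-instruct",
    "google/t5-efficient-large-dl12",
):
    cfg = AutoConfig.from_pretrained(repo)
    print(repo, "-> encoder layers:", cfg.num_layers, "decoder layers:", cfg.num_decoder_layers)
# Both failing models report encoder != decoder, while e.g. flan-t5-large reports equal counts.
```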
I get the error messages below.

Error message for the 900m model:
``` Error: Error converting to fp16: b'INFO:hf-to-gguf:Loading model: tFINE-900m-e16-d32-instruct\nINFO:gguf.gguf_writer:gguf: This GGUF file is for Little Endian only\nINFO:hf-to-gguf:Exporting model...\nINFO:hf-to-gguf:gguf: loading model part 'model.safetensors'\nINFO:hf-to-gguf:dec.blk.0.attn_k.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.0.attn_o.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.0.attn_q.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.0.attn_rel_b.weight, torch.float32 --> F16, shape = {16, 48}\nINFO:hf-to-gguf:dec.blk.0.attn_v.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.0.attn_norm.weight, torch.float32 --> F32, shape = {1024}\nINFO:hf-to-gguf:dec.blk.0.cross_attn_k.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.0.cross_attn_o.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.0.cross_attn_q.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.0.cross_attn_v.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.0.cross_attn_norm.weight, torch.float32 --> F32, shape = {1024}\nINFO:hf-to-gguf:dec.blk.0.ffn_gate.weight, torch.float32 --> F16, shape = {1024, 3072}\nINFO:hf-to-gguf:dec.blk.0.ffn_up.weight, torch.float32 --> F16, shape = {1024, 3072}\nINFO:hf-to-gguf:dec.blk.0.ffn_down.weight, torch.float32 --> F16, shape = {3072, 1024}\nINFO:hf-to-gguf:dec.blk.0.ffn_norm.weight, torch.float32 --> F32, shape = {1024}\nINFO:hf-to-gguf:dec.blk.1.attn_k.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.1.attn_o.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.1.attn_q.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.1.attn_v.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.1.attn_norm.weight, torch.float32 --> F32, shape = {1024}\nINFO:hf-to-gguf:dec.blk.1.cross_attn_k.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.1.cross_attn_o.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.1.cross_attn_q.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.1.cross_attn_v.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.1.cross_attn_norm.weight, torch.float32 --> F32, shape = {1024}\nINFO:hf-to-gguf:dec.blk.1.ffn_gate.weight, torch.float32 --> F16, shape = {1024, 3072}\nINFO:hf-to-gguf:dec.blk.1.ffn_up.weight, torch.float32 --> F16, shape = {1024, 3072}\nINFO:hf-to-gguf:dec.blk.1.ffn_down.weight, torch.float32 --> F16, shape = {3072, 1024}\nINFO:hf-to-gguf:dec.blk.1.ffn_norm.weight, torch.float32 --> F32, shape = {1024}\nINFO:hf-to-gguf:dec.blk.10.attn_k.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.10.attn_o.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.10.attn_q.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.10.attn_v.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.10.attn_norm.weight, torch.float32 --> F32, shape = {1024}\nINFO:hf-to-gguf:dec.blk.10.cross_attn_k.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.10.cross_attn_o.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.10.cross_attn_q.weight, torch.float32 --> F16, shape = {1024, 
1024}\nINFO:hf-to-gguf:dec.blk.10.cross_attn_v.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.10.cross_attn_norm.weight, torch.float32 --> F32, shape = {1024}\nINFO:hf-to-gguf:dec.blk.10.ffn_gate.weight, torch.float32 --> F16, shape = {1024, 3072}\nINFO:hf-to-gguf:dec.blk.10.ffn_up.weight, torch.float32 --> F16, shape = {1024, 3072}\nINFO:hf-to-gguf:dec.blk.10.ffn_down.weight, torch.float32 --> F16, shape = {3072, 1024}\nINFO:hf-to-gguf:dec.blk.10.ffn_norm.weight, torch.float32 --> F32, shape = {1024}\nINFO:hf-to-gguf:dec.blk.11.attn_k.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.11.attn_o.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.11.attn_q.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.11.attn_v.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.11.attn_norm.weight, torch.float32 --> F32, shape = {1024}\nINFO:hf-to-gguf:dec.blk.11.cross_attn_k.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.11.cross_attn_o.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.11.cross_attn_q.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.11.cross_attn_v.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.11.cross_attn_norm.weight, torch.float32 --> F32, shape = {1024}\nINFO:hf-to-gguf:dec.blk.11.ffn_gate.weight, torch.float32 --> F16, shape = {1024, 3072}\nINFO:hf-to-gguf:dec.blk.11.ffn_up.weight, torch.float32 --> F16, shape = {1024, 3072}\nINFO:hf-to-gguf:dec.blk.11.ffn_down.weight, torch.float32 --> F16, shape = {3072, 1024}\nINFO:hf-to-gguf:dec.blk.11.ffn_norm.weight, torch.float32 --> F32, shape = {1024}\nINFO:hf-to-gguf:dec.blk.12.attn_k.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.12.attn_o.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.12.attn_q.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.12.attn_v.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.12.attn_norm.weight, torch.float32 --> F32, shape = {1024}\nINFO:hf-to-gguf:dec.blk.12.cross_attn_k.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.12.cross_attn_o.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.12.cross_attn_q.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.12.cross_attn_v.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.12.cross_attn_norm.weight, torch.float32 --> F32, shape = {1024}\nINFO:hf-to-gguf:dec.blk.12.ffn_gate.weight, torch.float32 --> F16, shape = {1024, 3072}\nINFO:hf-to-gguf:dec.blk.12.ffn_up.weight, torch.float32 --> F16, shape = {1024, 3072}\nINFO:hf-to-gguf:dec.blk.12.ffn_down.weight, torch.float32 --> F16, shape = {3072, 1024}\nINFO:hf-to-gguf:dec.blk.12.ffn_norm.weight, torch.float32 --> F32, shape = {1024}\nINFO:hf-to-gguf:dec.blk.13.attn_k.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.13.attn_o.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.13.attn_q.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.13.attn_v.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.13.attn_norm.weight, torch.float32 --> F32, shape = {1024}\nINFO:hf-to-gguf:dec.blk.13.cross_attn_k.weight, torch.float32 --> F16, shape = {1024, 
1024}\nINFO:hf-to-gguf:dec.blk.13.cross_attn_o.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.13.cross_attn_q.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.13.cross_attn_v.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.13.cross_attn_norm.weight, torch.float32 --> F32, shape = {1024}\nINFO:hf-to-gguf:dec.blk.13.ffn_gate.weight, torch.float32 --> F16, shape = {1024, 3072}\nINFO:hf-to-gguf:dec.blk.13.ffn_up.weight, torch.float32 --> F16, shape = {1024, 3072}\nINFO:hf-to-gguf:dec.blk.13.ffn_down.weight, torch.float32 --> F16, shape = {3072, 1024}\nINFO:hf-to-gguf:dec.blk.13.ffn_norm.weight, torch.float32 --> F32, shape = {1024}\nINFO:hf-to-gguf:dec.blk.14.attn_k.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.14.attn_o.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.14.attn_q.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.14.attn_v.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.14.attn_norm.weight, torch.float32 --> F32, shape = {1024}\nINFO:hf-to-gguf:dec.blk.14.cross_attn_k.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.14.cross_attn_o.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.14.cross_attn_q.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.14.cross_attn_v.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.14.cross_attn_norm.weight, torch.float32 --> F32, shape = {1024}\nINFO:hf-to-gguf:dec.blk.14.ffn_gate.weight, torch.float32 --> F16, shape = {1024, 3072}\nINFO:hf-to-gguf:dec.blk.14.ffn_up.weight, torch.float32 --> F16, shape = {1024, 3072}\nINFO:hf-to-gguf:dec.blk.14.ffn_down.weight, torch.float32 --> F16, shape = {3072, 1024}\nINFO:hf-to-gguf:dec.blk.14.ffn_norm.weight, torch.float32 --> F32, shape = {1024}\nINFO:hf-to-gguf:dec.blk.15.attn_k.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.15.attn_o.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.15.attn_q.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.15.attn_v.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.15.attn_norm.weight, torch.float32 --> F32, shape = {1024}\nINFO:hf-to-gguf:dec.blk.15.cross_attn_k.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.15.cross_attn_o.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.15.cross_attn_q.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.15.cross_attn_v.weight, torch.float32 --> F16, shape = {1024, 1024}\nINFO:hf-to-gguf:dec.blk.15.cross_attn_norm.weight, torch.float32 --> F32, shape = {1024}\nINFO:hf-to-gguf:dec.blk.15.ffn_gate.weight, torch.float32 --> F16, shape = {1024, 3072}\nINFO:hf-to-gguf:dec.blk.15.ffn_up.weight, torch.float32 --> F16, shape = {1024, 3072}\nINFO:hf-to-gguf:dec.blk.15.ffn_down.weight, torch.float32 --> F16, shape = {3072, 1024}\nINFO:hf-to-gguf:dec.blk.15.ffn_norm.weight, torch.float32 --> F32, shape = {1024}\nTraceback (most recent call last):\n File "/home/user/app/llama.cpp/convert_hf_to_gguf.py", line 4149, in \n main()\n File "/home/user/app/llama.cpp/convert_hf_to_gguf.py", line 4143, in main\n model_instance.write()\n File "/home/user/app/llama.cpp/convert_hf_to_gguf.py", line 394, in write\n self.prepare_tensors()\n File 
"/home/user/app/llama.cpp/convert_hf_to_gguf.py", line 283, in prepare_tensors\n for new_name, data in ((n, d.squeeze().numpy()) for n, d in self.modify_tensors(data_torch, name, bid)):\n File "/home/user/app/llama.cpp/convert_hf_to_gguf.py", line 3407, in modify_tensors\n return [(self.map_tensor_name(name), data_torch)]\n File "/home/user/app/llama.cpp/convert_hf_to_gguf.py", line 203, in map_tensor_name\n raise ValueError(f"Can not map tensor {name!r}")\nValueError: Can not map tensor 'decoder.block.16.layer.0.SelfAttention.k.weight'\n' ```for the google one:
Error message for google/t5-efficient-large-dl12:
``` Error: Error quantizing: b'main: build = 3662 (7605ae7d)\nmain: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu\nmain: quantizing 't5-efficient-large-dl12.fp16.gguf' to 't5-efficient-large-dl12-q4_k_m.gguf' as Q4_K_M\nllama_model_loader: loaded meta data with 36 key-value pairs and 354 tensors from t5-efficient-large-dl12.fp16.gguf (version GGUF V3 (latest))\nllama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.\nllama_model_loader: - kv 0: general.architecture str = t5\nllama_model_loader: - kv 1: general.type str = model\nllama_model_loader: - kv 2: general.name str = T5 Efficient Large Dl12\nllama_model_loader: - kv 3: general.finetune str = dl12\nllama_model_loader: - kv 4: general.basename str = t5-efficient\nllama_model_loader: - kv 5: general.size_label str = large\nllama_model_loader: - kv 6: general.license str = apache-2.0\nllama_model_loader: - kv 7: general.tags arr[str,1] = ["deep-narrow"]\nllama_model_loader: - kv 8: general.languages arr[str,1] = ["en"]\nllama_model_loader: - kv 9: general.datasets arr[str,1] = ["c4"]\nllama_model_loader: - kv 10: t5.context_length u32 = 512\nllama_model_loader: - kv 11: t5.embedding_length u32 = 1024\nllama_model_loader: - kv 12: t5.feed_forward_length u32 = 4096\nllama_model_loader: - kv 13: t5.block_count u32 = 24\nllama_model_loader: - kv 14: t5.attention.head_count u32 = 16\nllama_model_loader: - kv 15: t5.attention.key_length u32 = 64\nllama_model_loader: - kv 16: t5.attention.value_length u32 = 64\nllama_model_loader: - kv 17: t5.attention.layer_norm_epsilon f32 = 0.000001\nllama_model_loader: - kv 18: t5.attention.relative_buckets_count u32 = 32\nllama_model_loader: - kv 19: t5.attention.layer_norm_rms_epsilon f32 = 0.000001\nllama_model_loader: - kv 20: t5.decoder_start_token_id u32 = 0\nllama_model_loader: - kv 21: general.file_type u32 = 1\nllama_model_loader: - kv 22: tokenizer.ggml.model str = t5\nllama_model_loader: - kv 23: tokenizer.ggml.pre str = default\nllama_model_loader: - kv 24: tokenizer.ggml.tokens arr[str,32128] = ["", "", "", "\xe2\x96\x81", "X"...\nllama_model_loader: - kv 25: tokenizer.ggml.scores arr[f32,32128] = [0.000000, 0.000000, 0.000000, -2.012...\nllama_model_loader: - kv 26: tokenizer.ggml.token_type arr[i32,32128] = [3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...\nllama_model_loader: - kv 27: tokenizer.ggml.add_space_prefix bool = true\nllama_model_loader: - kv 28: tokenizer.ggml.remove_extra_whitespaces bool = true\nllama_model_loader: - kv 29: tokenizer.ggml.precompiled_charsmap arr[u8,237539] = [0, 180, 2, 0, 0, 132, 0, 0, 0, 0, 0,...\nllama_model_loader: - kv 30: tokenizer.ggml.eos_token_id u32 = 1\nllama_model_loader: - kv 31: tokenizer.ggml.unknown_token_id u32 = 2\nllama_model_loader: - kv 32: tokenizer.ggml.padding_token_id u32 = 0\nllama_model_loader: - kv 33: tokenizer.ggml.add_bos_token bool = false\nllama_model_loader: - kv 34: tokenizer.ggml.add_eos_token bool = true\nllama_model_loader: - kv 35: general.quantization_version u32 = 2\nllama_model_loader: - type f32: 86 tensors\nllama_model_loader: - type f16: 268 tensors\nsrc/llama.cpp:17354: GGML_ASSERT((qs.n_attention_wv == n_attn_layer) && "n_attention_wv is unexpected") 
failed\n./llama.cpp/llama-quantize(+0x27b7db)[0x55f6579007db]\n./llama.cpp/llama-quantize(+0x27d3f7)[0x55f6579023f7]\n./llama.cpp/llama-quantize(+0x363cb9)[0x55f6579e8cb9]\n./llama.cpp/llama-quantize(+0x363f84)[0x55f6579e8f84]\n./llama.cpp/llama-quantize(+0x5e8f8)[0x55f6576e38f8]\n/usr/lib/x86_64-linux-gnu/libc.so.6(+0x29d90)[0x7f2d42bd4d90]\n/usr/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)[0x7f2d42bd4e40]\n./llama.cpp/llama-quantize(+0x5f695)[0x55f6576e4695]\nAborted (core dumped)\n'
```

I tested with flan-t5-large (which is symmetric) and it works fine, so the problem isn't model size. How can this be resolved?
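In case it helps with debugging, here's a quick way to confirm the checkpoint itself has more decoder blocks than the converter ends up mapping (rough sketch, not the Space's code; assumes a single local model.safetensors file, as in the log above):

```python
# Count encoder vs. decoder blocks straight from the checkpoint (sketch).
import re
from safetensors import safe_open

enc_blocks, dec_blocks = set(), set()
with safe_open("model.safetensors", framework="pt") as f:
    for name in f.keys():
        m = re.match(r"(encoder|decoder)\.block\.(\d+)\.", name)
        if m:
            (enc_blocks if m.group(1) == "encoder" else dec_blocks).add(int(m.group(2)))

print("encoder blocks:", len(enc_blocks))  # 16 for tFINE-900m-e16-d32-instruct
print("decoder blocks:", len(dec_blocks))  # 32 -> conversion dies at decoder.block.16
```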