
When the PhotoMakerLoader node loads the photomaker-v2.bin file, I get this error:

#2
by cfernando - opened

Error occurred when executing PhotoMakerLoader:

Error(s) in loading state_dict for PhotoMakerIDEncoder:
Unexpected key(s) in state_dict: "qformer_perceiver.perceiver_resampler.layers.0.0.norm1.bias", "qformer_perceiver.perceiver_resampler.layers.0.0.norm1.weight", "qformer_perceiver.perceiver_resampler.layers.0.0.norm2.bias", "qformer_perceiver.perceiver_resampler.layers.0.0.norm2.weight", "qformer_perceiver.perceiver_resampler.layers.0.0.to_kv.weight", "qformer_perceiver.perceiver_resampler.layers.0.0.to_out.weight", "qformer_perceiver.perceiver_resampler.layers.0.0.to_q.weight", "qformer_perceiver.perceiver_resampler.layers.0.1.0.bias", "qformer_perceiver.perceiver_resampler.layers.0.1.0.weight", "qformer_perceiver.perceiver_resampler.layers.0.1.1.weight", "qformer_perceiver.perceiver_resampler.layers.0.1.3.weight", "qformer_perceiver.perceiver_resampler.layers.1.0.norm1.bias", "qformer_perceiver.perceiver_resampler.layers.1.0.norm1.weight", "qformer_perceiver.perceiver_resampler.layers.1.0.norm2.bias", "qformer_perceiver.perceiver_resampler.layers.1.0.norm2.weight", "qformer_perceiver.perceiver_resampler.layers.1.0.to_kv.weight", "qformer_perceiver.perceiver_resampler.layers.1.0.to_out.weight", "qformer_perceiver.perceiver_resampler.layers.1.0.to_q.weight", "qformer_perceiver.perceiver_resampler.layers.1.1.0.bias", "qformer_perceiver.perceiver_resampler.layers.1.1.0.weight", "qformer_perceiver.perceiver_resampler.layers.1.1.1.weight", "qformer_perceiver.perceiver_resampler.layers.1.1.3.weight", "qformer_perceiver.perceiver_resampler.layers.2.0.norm1.bias", "qformer_perceiver.perceiver_resampler.layers.2.0.norm1.weight", "qformer_perceiver.perceiver_resampler.layers.2.0.norm2.bias", "qformer_perceiver.perceiver_resampler.layers.2.0.norm2.weight", "qformer_perceiver.perceiver_resampler.layers.2.0.to_kv.weight", "qformer_perceiver.perceiver_resampler.layers.2.0.to_out.weight", "qformer_perceiver.perceiver_resampler.layers.2.0.to_q.weight", "qformer_perceiver.perceiver_resampler.layers.2.1.0.bias", "qformer_perceiver.perceiver_resampler.layers.2.1.0.weight", 
"qformer_perceiver.perceiver_resampler.layers.2.1.1.weight", "qformer_perceiver.perceiver_resampler.layers.2.1.3.weight", "qformer_perceiver.perceiver_resampler.layers.3.0.norm1.bias", "qformer_perceiver.perceiver_resampler.layers.3.0.norm1.weight", "qformer_perceiver.perceiver_resampler.layers.3.0.norm2.bias", "qformer_perceiver.perceiver_resampler.layers.3.0.norm2.weight", "qformer_perceiver.perceiver_resampler.layers.3.0.to_kv.weight", "qformer_perceiver.perceiver_resampler.layers.3.0.to_out.weight", "qformer_perceiver.perceiver_resampler.layers.3.0.to_q.weight", "qformer_perceiver.perceiver_resampler.layers.3.1.0.bias", "qformer_perceiver.perceiver_resampler.layers.3.1.0.weight", "qformer_perceiver.perceiver_resampler.layers.3.1.1.weight", "qformer_perceiver.perceiver_resampler.layers.3.1.3.weight", "qformer_perceiver.perceiver_resampler.norm_out.bias", "qformer_perceiver.perceiver_resampler.norm_out.weight", "qformer_perceiver.perceiver_resampler.proj_in.bias", "qformer_perceiver.perceiver_resampler.proj_in.weight", "qformer_perceiver.perceiver_resampler.proj_out.bias", "qformer_perceiver.perceiver_resampler.proj_out.weight", "qformer_perceiver.token_norm.bias", "qformer_perceiver.token_norm.weight", "qformer_perceiver.token_proj.0.bias", "qformer_perceiver.token_proj.0.weight", "qformer_perceiver.token_proj.2.bias", "qformer_perceiver.token_proj.2.weight".

File "/home/fer/boldi/IA/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/home/fer/boldi/IA/ComfyUI/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/home/fer/boldi/IA/ComfyUI/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/home/fer/boldi/IA/ComfyUI/comfy_extras/nodes_photomaker.py", line 134, in load_photomaker_model
photomaker_model.load_state_dict(data)
File "/home/fer/boldi/IA/ComfyUI/venv10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2189, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError (node 11, PhotoMakerLoader)

Error occurred when executing PhotoMakerLoader:

Error(s) in loading state_dict for PhotoMakerIDEncoder:
Unexpected key(s) in state_dict: (same `qformer_perceiver.*` key list as above)

File "S:\ComfyUI_Windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "S:\ComfyUI_Windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "S:\ComfyUI_Windows_portable\ComfyUI\custom_nodes\ComfyUI-0246\utils.py", line 381, in new_func
res_value = old_func(*final_args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "S:\ComfyUI_Windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "S:\ComfyUI_Windows_portable\ComfyUI\comfy_extras\nodes_photomaker.py", line 134, in load_photomaker_model
photomaker_model.load_state_dict(data)
File "S:\ComfyUI_Windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 2152, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(

same error encountered.....

same here

same here ;-)

same error for me too.

same here

Same here, I'm guessing the architecture is slightly different and the code doesn't expect it.
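That guess matches the key names: everything under `qformer_perceiver.*` is new in the V2 ID encoder, so the V1 `PhotoMakerIDEncoder` rejects the checkpoint under strict loading. As a sketch, you can tell the two checkpoints apart from their keys alone (`is_photomaker_v2` is just an illustrative helper name, and the V1-style key below is only an example):

```python
def is_photomaker_v2(state_dict_keys):
    """Heuristic: V2 checkpoints carry an extra QFormer/Perceiver branch
    whose parameters all live under the qformer_perceiver. prefix."""
    return any(k.startswith("qformer_perceiver.") for k in state_dict_keys)

# Keys taken from the error message above (V2) vs. an illustrative V1-style key:
v2_keys = [
    "qformer_perceiver.token_norm.weight",
    "qformer_perceiver.perceiver_resampler.proj_in.weight",
]
v1_keys = ["vision_model.encoder.layers.0.self_attn.q_proj.weight"]

print(is_photomaker_v2(v2_keys))  # True
print(is_photomaker_v2(v1_keys))  # False
```

With PyTorch installed you could run this on the real file, e.g. `sd = torch.load("photomaker-v2.bin", map_location="cpu")` and pass `sd.keys()`.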

The ComfyUI built-in PhotoMaker node doesn’t yet support V2, so I made one myself based on ZHO’s.

https://github.com/edwios/ComfyUI-PhotoMakerV2-ZHO/tree/main

Credits of course go to ZHO and TencentARC.
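For anyone wondering why the node errors out instead of ignoring the extras: PyTorch's `load_state_dict` defaults to `strict=True`, which raises on any key mismatch. The check is roughly equivalent to the set comparison below (a simplified sketch, not torch's actual implementation, with an illustrative V1-style key), and it also shows why `strict=False` would be the wrong fix: the V2-only weights would simply be discarded, not used.

```python
def strict_key_check(model_keys, checkpoint_keys):
    """Simplified version of the key check torch performs under strict loading."""
    missing = sorted(set(model_keys) - set(checkpoint_keys))
    unexpected = sorted(set(checkpoint_keys) - set(model_keys))
    if missing or unexpected:
        raise RuntimeError(
            f"Unexpected key(s) in state_dict: {unexpected}; missing: {missing}"
        )

# The V1 PhotoMakerIDEncoder has no qformer_perceiver parameters,
# so every V2-only key shows up as "unexpected":
model_keys = ["fuse_module.mlp1.fc1.weight"]  # illustrative V1-style key
ckpt_keys = model_keys + ["qformer_perceiver.token_norm.weight"]

try:
    strict_key_check(model_keys, ckpt_keys)
except RuntimeError as e:
    print(e)  # the qformer_perceiver key is reported as unexpected
```

Dropping the unexpected keys (which is what `strict=False` does) would load a model that never uses the QFormer branch, so the real fix is a loader that instantiates the V2 encoder, as in the repo above.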
