Problem loading safe tensor file using FluxTransformer2DModel

#8
by ukaprch - opened

So apparently with the new Fill model the number of input channels changes from the usual 64 to 384, and FluxTransformer2DModel is apparently not set up to accept this. Even with the parameters below you get an error:

import torch
from diffusers import FluxFillPipeline, FluxTransformer2DModel
#base_model = "black-forest-labs/FLUX.1-Fill-dev"
base_model = "./flux-dev/inpaint/flux1-fill-dev.safetensors"
dtype = torch.bfloat16

transformer = FluxTransformer2DModel.from_single_file(base_model, subfolder="transformer", low_cpu_mem_usage=False, ignore_mismatched_sizes=True, torch_dtype=dtype)
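One sketch of a workaround (untested assumption, not a confirmed fix): `from_single_file` accepts a `config` argument pointing at a hub repo, so passing the Fill repo's transformer config should make diffusers build a 384-in-channel model instead of the default 64-channel one, avoiding the shape mismatch entirely.

```python
import torch
from diffusers import FluxTransformer2DModel

# Sketch: tell from_single_file to use the Fill variant's config
# (in_channels=384) rather than inferring the base dev config.
# Assumes hub access; the local path is the checkpoint from above.
transformer = FluxTransformer2DModel.from_single_file(
    "./flux-dev/inpaint/flux1-fill-dev.safetensors",
    config="black-forest-labs/FLUX.1-Fill-dev",
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
)
```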

Message=Cannot load because x_embedder.weight expected shape tensor(..., device='meta', size=(3072, 64)), but got torch.Size([3072, 384]). If you want to instead overwrite randomly initialized weights, please make sure to pass both low_cpu_mem_usage=False and ignore_mismatched_sizes=True. For more information, see also: https://github.com/huggingface/diffusers/issues/1619#issuecomment-1345604389 as an example.
Source=C:\Users\xxxxx\source\repos\AI\runtimes\bin\windows\Python312\Lib\site-packages\diffusers\models\model_loading_utils.py
StackTrace:
File "C:\Users\xxxxx\source\repos\AI\runtimes\bin\windows\Python312\Lib\site-packages\diffusers\models\model_loading_utils.py", line 223, in load_model_dict_into_meta
raise ValueError(
File "C:\Users\xxxxx\source\repos\AI\runtimes\bin\windows\Python312\Lib\site-packages\diffusers\loaders\single_file_model.py", line 299, in from_single_file
unexpected_keys = load_model_dict_into_meta(model, diffusers_format_checkpoint, dtype=torch_dtype)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxxxx\source\repos\AI\runtimes\bin\windows\Python312\Lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxxxx\source\repos\AI\modules\Inpaint-Anything\app\app.py", line 194, in quantize_model
transformer = FluxTransformer2DModel.from_single_file(base_model, subfolder="transformer", low_cpu_mem_usage=False, ignore_mismatched_sizes=True, torch_dtype=dtype)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxxxx\source\repos\AI\modules\Inpaint-Anything\app\app.py", line 404, in setup_model (Current frame)
quantize_model()
File "C:\Users\xxxxx\source\repos\AI\runtimes\bin\windows\Python312\Lib\site-packages\gradio\utils.py", line 826, in wrapper
response = f(*args, **kwargs)
File "C:\Users\xxxxx\source\repos\AI\runtimes\bin\windows\Python312\Lib\site-packages\anyio\_backends\_asyncio.py", line 859, in run
result = context.run(func, *args)
File "C:\Users\xxxxx\source\repos\AI\runtimes\bin\windows\Python312\Lib\threading.py", line 1075, in _bootstrap_inner
self.run()
File "C:\Users\xxxxx\source\repos\AI\runtimes\bin\windows\Python312\Lib\threading.py", line 1032, in _bootstrap
self._bootstrap_inner()
ValueError: Cannot load because x_embedder.weight expected shape tensor(..., device='meta', size=(3072, 64)), but got torch.Size([3072, 384]). If you want to instead overwrite randomly initialized weights, please make sure to pass both low_cpu_mem_usage=False and ignore_mismatched_sizes=True. For more information, see also: https://github.com/huggingface/diffusers/issues/1619#issuecomment-1345604389 as an example.
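The 384 in the error message is consistent with how the Fill pipeline packs its inputs (an illustration based on my reading of the error, not an authoritative breakdown): 2x2-patchified noisy latents, 2x2-patchified masked-image latents, and a per-pixel mask folded down by the VAE scale factor times the patch size.

```python
# Channel arithmetic behind torch.Size([3072, 384]) -- assumed breakdown.
LATENT_CHANNELS = 16   # VAE latent channels for Flux
PATCH = 2              # 2x2 patchification
VAE_SCALE = 8          # pixel -> latent downscale factor

packed_latents = LATENT_CHANNELS * PATCH * PATCH       # 64: noisy latents
packed_masked_image = LATENT_CHANNELS * PATCH * PATCH  # 64: masked-image latents
packed_mask = 1 * (VAE_SCALE * PATCH) ** 2             # 256: binary mask folded 16x16

in_channels = packed_latents + packed_masked_image + packed_mask
print(in_channels)  # 384 -- matching the checkpoint's x_embedder, vs. 64 in the base config
```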

I've run into a very strange problem: the Fill model misbehaves during outpainting. For example, when I extend an image on both the top and bottom, the sampler starts out fine, but at around step 4 or 5 the extended regions suddenly get filled with noise, and the output is unusable. At first I thought diffusers wasn't installed, but after installing it the behavior was the same, and the same thing happens with your official workflow. What could be causing this? It's very strange.
ComfyUI_00106_.png

Also, this only happens with outpainting; regular inpainting works completely fine.

Hi, how much VRAM does this loading consume for you? My 24 GB card OOMs.
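One common way to get the full pipeline under 24 GB (a sketch, not a measured result for this model): sequential CPU offload keeps only the active submodule on the GPU, trading speed for memory.

```python
import torch
from diffusers import FluxFillPipeline

pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
)
# Moves each submodule to the GPU only while it runs; slower, but
# peak VRAM stays far below holding the whole pipeline resident.
pipe.enable_sequential_cpu_offload()
```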
