Could you give me a simple example?

#7
by AisingioroHao0 - opened

Which model was this trained with, and how do I load these parameters? I tried diffusers.StableDiffusionPipeline.from_ckpt("safetensors_path") and diffusers.StableDiffusionPipeline.from_pretrained(), but both failed.
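For reference, the two calls I tried look roughly like this (the checkpoint path is the Counterfeit-V3.0 safetensors file; the repo id is just a placeholder):

import diffusers

# Both of these failed for me; the repo id below is only a placeholder
pipe = diffusers.StableDiffusionPipeline.from_ckpt("Counterfeit-V3.0_fp32.safetensors")
pipe = diffusers.StableDiffusionPipeline.from_pretrained("some/repo-id")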

1. Download https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py and upload it to your Colab files.
2. Run the conversion script:
!mkdir converted
!python convert_original_stable_diffusion_to_diffusers.py --checkpoint_path checkpoint_path --from_safetensors --dump_path converted
where checkpoint_path is the path to the checkpoint file, e.g. "Counterfeit-V3.0_fp32.safetensors".
3. Load the pipeline from the "converted" directory:
import torch
from diffusers import ControlNetModel, EulerDiscreteScheduler, StableDiffusionControlNetPipeline

# Scheduler from the converted model directory
scheduler = EulerDiscreteScheduler.from_pretrained("converted", subfolder="scheduler")

# Canny ControlNet plus the converted base model
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16).to("cuda")
pipe = StableDiffusionControlNetPipeline.from_pretrained("converted", scheduler=scheduler, controlnet=controlnet, safety_checker=None, torch_dtype=torch.float16)

pipe = pipe.to("cuda")
[attached image: image.png]
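Once the pipeline is loaded, generation follows the usual ControlNet canny workflow, roughly like the sketch below (the input image, prompt, and thresholds are placeholders, not anything specific to this checkpoint):

import cv2
import numpy as np
from PIL import Image

# Placeholder conditioning source; any RGB image works
source = np.array(Image.open("input.png").convert("RGB"))

# Canny edge map used as the ControlNet conditioning image
edges = cv2.Canny(source, 100, 200)
canny_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

# Placeholder prompt; tune steps and guidance to taste
result = pipe("masterpiece, best quality, 1girl", image=canny_image, num_inference_steps=30).images[0]
result.save("output.png")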

Thank you so much!

But I ran into some problems:

Traceback (most recent call last):
  File "/home/aihao/miniconda3/envs/torch_stable/lib/python3.11/site-packages/transformers/modeling_utils.py", line 463, in load_state_dict
    return torch.load(checkpoint_file, map_location="cpu")
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/aihao/miniconda3/envs/torch_stable/lib/python3.11/site-packages/torch/serialization.py", line 797, in load
    with _open_zipfile_reader(opened_file) as opened_zipfile:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/aihao/miniconda3/envs/torch_stable/lib/python3.11/site-packages/torch/serialization.py", line 283, in __init__
    super().__init__(torch._C.PyTorchFileReader(name_or_buffer))
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/aihao/miniconda3/envs/torch_stable/lib/python3.11/site-packages/transformers/modeling_utils.py", line 467, in load_state_dict
    if f.read(7) == "version":
       ^^^^^^^^^
  File "<frozen codecs>", line 322, in decode
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 64: invalid start byte

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "convert_original_stable_diffusion_to_diffusers.py", line 138, in <module>
    pipe = download_from_original_stable_diffusion_ckpt(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/aihao/miniconda3/envs/torch_stable/lib/python3.11/site-packages/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py", line 1440, in download_from_original_stable_diffusion_ckpt
    safety_checker = StableDiffusionSafetyChecker.from_pretrained("CompVis/stable-diffusion-safety-checker")
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/aihao/miniconda3/envs/torch_stable/lib/python3.11/site-packages/transformers/modeling_utils.py", line 2604, in from_pretrained
    state_dict = load_state_dict(resolved_archive_file)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/aihao/miniconda3/envs/torch_stable/lib/python3.11/site-packages/transformers/modeling_utils.py", line 479, in load_state_dict
    raise OSError(
OSError: Unable to load weights from pytorch checkpoint file for '/home/aihao/.cache/huggingface/hub/models--CompVis--stable-diffusion-safety-checker/snapshots/cb41f3a270d63d454d385fc2e4f571c487c253c5/pytorch_model.bin' at '/home/aihao/.cache/huggingface/hub/models--CompVis--stable-diffusion-safety-checker/snapshots/cb41f3a270d63d454d385fc2e4f571c487c253c5/pytorch_model.bin'. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.

Can you help me analyze it?

Now it works. I modified the convert script so that the load_safety_checker argument passed to download_from_original_stable_diffusion_ckpt is False. But the image generated when I reuse the converted model as above is garbled.
[attached image: image.png]
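For reference, the workaround amounts to disabling the safety-checker load in the conversion call, roughly like this (argument names as in diffusers' convert_from_ckpt.py; the output directory is a placeholder):

from diffusers.pipelines.stable_diffusion.convert_from_ckpt import download_from_original_stable_diffusion_ckpt

# Skip loading CompVis/stable-diffusion-safety-checker, which was failing above
pipe = download_from_original_stable_diffusion_ckpt(
    checkpoint_path="Counterfeit-V3.0_fp32.safetensors",
    from_safetensors=True,
    load_safety_checker=False,
)
pipe.save_pretrained("converted")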
