Unable to convert safetensors

#39
by suburbsmedicos - opened

Hi, I downloaded 3 models from Civitai, and none of them convert. I don't know what I'm doing wrong.

Specs: M1 MacBook Air
macOS Sonoma 14.5
Xcode 15.4


Starting python converter
scikit-learn version 1.3.1 is not supported. Minimum required version: 0.17. Maximum required version: 1.1.2. Disabling scikit-learn conversion API.
Initializing StableDiffusionPipeline from /Users/travis/Downloads/ohmenToontastic_ohmenToontasticV2.safetensors..
[5541] Failed to execute script 'torch2coreml' due to unhandled exception: Trying to set a tensor of shape torch.Size([320, 4, 3, 3]) in "weight" (which has shape torch.Size([320, 9, 3, 3])), this look incorrect.
[5541] Traceback:
Traceback (most recent call last):
  File "transformers/utils/hub.py", line 430, in cached_file
    resolved_file = hf_hub_download(
                    ^^^^^^^^^^^^^^^^
  File "huggingface_hub/utils/_validators.py", line 118, in _inner_fn
  File "huggingface_hub/file_download.py", line 1340, in hf_hub_download
huggingface_hub.utils._errors.LocalEntryNotFoundError: Cannot find the requested files in the disk cache and outgoing traffic has been disabled. To enable hf.co look-ups and downloads online, set 'local_files_only' to False.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "diffusers/pipelines/stable_diffusion/convert_from_ckpt.py", line 791, in convert_ldm_clip_checkpoint
    config = CLIPTextConfig.from_pretrained(config_name, local_files_only=local_files_only)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "transformers/models/clip/configuration_clip.py", line 141, in from_pretrained
    config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "transformers/configuration_utils.py", line 622, in get_config_dict
    config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "transformers/configuration_utils.py", line 677, in _get_config_dict
    resolved_config_file = cached_file(
                           ^^^^^^^^^^^^
  File "transformers/utils/hub.py", line 470, in cached_file
    raise EnvironmentError(
OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like openai/clip-vit-large-patch14 is not the path to a directory containing a file named config.json.
Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "guernikatools/torch2coreml.py", line 150, in main
  File "diffusers/loaders.py", line 2822, in from_single_file
    pipe = download_from_original_stable_diffusion_ckpt(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "diffusers/pipelines/stable_diffusion/convert_from_ckpt.py", line 1633, in download_from_original_stable_diffusion_ckpt
    text_model = convert_ldm_clip_checkpoint(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "diffusers/pipelines/stable_diffusion/convert_from_ckpt.py", line 793, in convert_ldm_clip_checkpoint
    raise ValueError(
ValueError: With local_files_only set to True, you must first locally save the configuration in the following path: 'openai/clip-vit-large-patch14'.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "transformers/utils/hub.py", line 430, in cached_file
    resolved_file = hf_hub_download(
                    ^^^^^^^^^^^^^^^^
  File "huggingface_hub/utils/_validators.py", line 118, in _inner_fn
  File "huggingface_hub/file_download.py", line 1340, in hf_hub_download
huggingface_hub.utils._errors.LocalEntryNotFoundError: Cannot find the requested files in the disk cache and outgoing traffic has been disabled. To enable hf.co look-ups and downloads online, set 'local_files_only' to False.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "diffusers/pipelines/stable_diffusion/convert_from_ckpt.py", line 791, in convert_ldm_clip_checkpoint
    config = CLIPTextConfig.from_pretrained(config_name, local_files_only=local_files_only)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "transformers/models/clip/configuration_clip.py", line 141, in from_pretrained
    config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "transformers/configuration_utils.py", line 622, in get_config_dict
    config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "transformers/configuration_utils.py", line 677, in _get_config_dict
    resolved_config_file = cached_file(
                           ^^^^^^^^^^^^
  File "transformers/utils/hub.py", line 470, in cached_file
    raise EnvironmentError(
OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like openai/clip-vit-large-patch14 is not the path to a directory containing a file named config.json.
Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "guernikatools/torch2coreml.py", line 153, in main
  File "diffusers/loaders.py", line 2822, in from_single_file
    pipe = download_from_original_stable_diffusion_ckpt(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "diffusers/pipelines/stable_diffusion/convert_from_ckpt.py", line 1633, in download_from_original_stable_diffusion_ckpt
    text_model = convert_ldm_clip_checkpoint(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "diffusers/pipelines/stable_diffusion/convert_from_ckpt.py", line 793, in convert_ldm_clip_checkpoint
    raise ValueError(
ValueError: With local_files_only set to True, you must first locally save the configuration in the following path: 'openai/clip-vit-large-patch14'.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "guernikatools/torch2coreml.py", line 500, in <module>
  File "guernikatools/torch2coreml.py", line 155, in main
  File "diffusers/loaders.py", line 2822, in from_single_file
    pipe = download_from_original_stable_diffusion_ckpt(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "diffusers/pipelines/stable_diffusion/convert_from_ckpt.py", line 1442, in download_from_original_stable_diffusion_ckpt
    set_module_tensor_to_device(unet, param_name, "cpu", value=param)
  File "accelerate/utils/modeling.py", line 285, in set_module_tensor_to_device
ValueError: Trying to set a tensor of shape torch.Size([320, 4, 3, 3]) in "weight" (which has shape torch.Size([320, 9, 3, 3])), this look incorrect.
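For context on the final ValueError: a UNet conv_in weight of shape [320, 9, 3, 3] expects 9 input channels, which is characteristic of inpainting checkpoints, while the [320, 4, 3, 3] tensor actually found in the file is the standard text-to-image layout. In other words, the converter appears to be instantiating an inpainting UNet for a non-inpainting checkpoint. Below is a minimal diagnostic sketch to check what a .safetensors checkpoint really contains; the key name assumes the usual LDM layout, and both helper functions are illustrative, not part of the converter:

```python
# Diagnostic sketch: read the UNet conv_in weight shape from a Stable Diffusion
# .safetensors checkpoint and guess the model type from its input-channel count.
# The key name below follows the standard LDM layout; adjust if yours differs.

CONV_IN_KEY = "model.diffusion_model.input_blocks.0.0.weight"  # UNet conv_in

def classify_unet(in_channels: int) -> str:
    """Map the conv_in input-channel count to a likely model type."""
    return {4: "standard text-to-image", 9: "inpainting"}.get(in_channels, "unknown")

def inspect_checkpoint(path: str) -> str:
    # Imported lazily so classify_unet works without the safetensors package.
    from safetensors import safe_open
    with safe_open(path, framework="pt", device="cpu") as f:
        shape = f.get_slice(CONV_IN_KEY).get_shape()  # e.g. [320, 4, 3, 3]
    return classify_unet(shape[1])

if __name__ == "__main__":
    import sys
    print(inspect_checkpoint(sys.argv[1]))
```

If this prints "standard text-to-image" for your file, the checkpoint itself is fine and the mismatch comes from the conversion settings (e.g. an inpainting option being selected or inferred).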

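The repeated LocalEntryNotFoundError lines also show that the converter runs with local_files_only=True while the config for openai/clip-vit-large-patch14 is missing from the local Hugging Face cache. One possible workaround, sketched here under the assumption that the converter reads the standard Hugging Face cache location, is to populate the cache once while online:

```python
# Sketch: pre-populate the local Hugging Face cache with the CLIP text-encoder
# files that the converter looks up offline. Run once with network access.

def cache_folder_name(repo_id: str) -> str:
    """Folder name huggingface_hub uses for a model repo inside the cache."""
    return "models--" + repo_id.replace("/", "--")

if __name__ == "__main__":
    from huggingface_hub import snapshot_download  # requires network access
    path = snapshot_download(
        "openai/clip-vit-large-patch14",
        allow_patterns=["*.json", "*.txt"],  # configs and tokenizer files only
    )
    print("cached at:", path)
```

After this, the config should resolve from the cache (under the models--openai--clip-vit-large-patch14 folder) even when lookups to hf.co are disabled.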