gptq_model-4bit--1g.safetensors does not contain metadata.

#3
by kenM1 - opened

Does anyone know how I can fix this issue?
I'm getting:
2023-06-14 12:21:02 WARNING:The safetensors archive passed at models\TheBloke_starchat-beta-GPTQ\gptq_model-4bit--1g.safetensors does not contain metadata. Make sure to save your model with the save_pretrained method. Defaulting to 'pt' metadata.
2023-06-14 12:21:07 WARNING:GPTBigCodeGPTQForCausalLM hasn't fused attention module yet, will skip inject fused attention.
2023-06-14 12:21:07 WARNING:GPTBigCodeGPTQForCausalLM hasn't fused mlp module yet, will skip inject fused mlp.
When I try to load the model I get:

Traceback (most recent call last):
  File "C:\Users\me\text-generation-webui\server.py", line 70, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "C:\Users\me\text-generation-webui\modules\models.py", line 94, in load_model
    output = load_func(model_name)
  File "C:\Users\me\text-generation-webui\modules\models.py", line 296, in AutoGPTQ_loader
    return modules.AutoGPTQ_loader.load_quantized(model_name)
  File "C:\Users\me\text-generation-webui\modules\AutoGPTQ_loader.py", line 60, in load_quantized
    model.embed_tokens = model.model.model.embed_tokens
  File "C:\Users\me\miniconda3\envs\textgen\lib\site-packages\torch\nn\modules\module.py", line 1614, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'GPTBigCodeForCausalLM' object has no attribute 'model'

Any help would be appreciated.

This is a new bug in text-generation-webui

Until a proper fix is introduced, please follow the instructions at the end of this issue thread: https://github.com/oobabooga/text-generation-webui/issues/2655#issuecomment-1590895961
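For context on the traceback above: the hard-coded attribute path in AutoGPTQ_loader.py assumes a LLaMA-style wrapper where the embedding layer sits at model.model.model.embed_tokens, but GPTBigCodeForCausalLM does not nest a second `.model`, so the chained lookup raises AttributeError. A minimal sketch of why the lookup fails (the class names here are hypothetical stand-ins for the real wrappers):

```python
# Hypothetical stand-ins illustrating the two wrapper layouts; not the
# real transformers classes.

class LlamaStyleModel:
    """LLaMA-style wrappers nest twice: model.model.embed_tokens exists."""
    def __init__(self):
        self.model = type("Inner", (), {})()
        self.model.model = type("Core", (), {"embed_tokens": "weights"})()

class GPTBigCodeStyleModel:
    """GPTBigCode-style wrappers have no nested .model attribute."""
    def __init__(self):
        self.transformer = type("Core", (), {"embed_tokens": "weights"})()

llama = LlamaStyleModel()
print(llama.model.model.embed_tokens)  # prints: weights

bigcode = GPTBigCodeStyleModel()
try:
    bigcode.model.model.embed_tokens  # same chained lookup the loader does
except AttributeError as e:
    print(e)  # no attribute 'model', like the traceback above
```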

Successfully loaded TheBloke_starchat-beta-GPTQ

Here's an easy guide for anyone looking for the file: AutoGPTQ_loader.py
Mine was located at: C:\Users\me\text-generation-webui\modules
These lines are at the bottom of the file (comment them out as shown):

# These lines fix the multimodal extension when used with AutoGPTQ
# if not hasattr(model, 'dtype'):
#     model.dtype = model.model.dtype

# if not hasattr(model, 'embed_tokens'):
#     model.embed_tokens = model.model.model.embed_tokens

# if not hasattr(model.model, 'embed_tokens'):
#     model.model.embed_tokens = model.model.model.embed_tokens

After it's been edited, I simply loaded the model and:

Successfully loaded TheBloke_starchat-beta-GPTQ
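For anyone who would rather not comment the lines out entirely, a guarded variant could skip the fix-ups when the attribute path doesn't exist. This is only a sketch, assuming the same attribute paths as the lines quoted in the guide above; `patch_multimodal_attrs` is a hypothetical helper name, not part of text-generation-webui:

```python
def patch_multimodal_attrs(model):
    """Copy dtype/embed_tokens up the wrapper chain when the path exists.

    Hypothetical alternative to deleting the lines: models without the
    nested .model attribute (e.g. GPTBigCode) are skipped instead of
    crashing with AttributeError.
    """
    try:
        if not hasattr(model, 'dtype'):
            model.dtype = model.model.dtype
        if not hasattr(model, 'embed_tokens'):
            model.embed_tokens = model.model.model.embed_tokens
        if not hasattr(model.model, 'embed_tokens'):
            model.model.embed_tokens = model.model.model.embed_tokens
    except AttributeError:
        # GPTBigCode-style models lack the nested .model; leave them as-is.
        pass
```

This keeps the multimodal fix-up working for LLaMA-style models while avoiding the crash for GPTBigCode ones.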

Thanks for the guide, kenM1.

Future users: note that the bug in text-generation-webui has now been fixed, so all that's required to get it working now is to update to the latest version of text-generation-webui.