'LlamaCppModel' object has no attribute 'model'

#2
by DrNicefellow - opened

01:36:46-515173 ERROR Failed to load the model.
Traceback (most recent call last):
File "/workspace/text-generation-webui/modules/ui_model_menu.py", line 245, in load_model_wrapper
shared.model, shared.tokenizer = load_model(selected_model, loader)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/text-generation-webui/modules/models.py", line 94, in load_model
output = load_func_map[loader](model_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/text-generation-webui/modules/models.py", line 272, in llamacpp_loader
model, tokenizer = LlamaCppModel.from_pretrained(model_file)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/text-generation-webui/modules/llamacpp_model.py", line 103, in from_pretrained
result.model = Llama(**params)
^^^^^^^^^^^^^^^
File "/workspace/text-generation-webui/installer_files/env/lib/python3.11/site-packages/llama_cpp_cuda/llama.py", line 358, in __init__
self._model = self._stack.enter_context(contextlib.closing(_LlamaModel(
^^^^^^^^^^^^
File "/workspace/text-generation-webui/installer_files/env/lib/python3.11/site-packages/llama_cpp_cuda/_internals.py", line 54, in __init__
raise ValueError(f"Failed to load model from file: {path_model}")
ValueError: Failed to load model from file: models/gemma-2-27b-it-Q4_K_M.gguf

Exception ignored in: <function LlamaCppModel.__del__ at 0x7f1d67294c20>
Traceback (most recent call last):
File "/workspace/text-generation-webui/modules/llamacpp_model.py", line 58, in __del__
del self.model
^^^^^^^^^^
AttributeError: 'LlamaCppModel' object has no attribute 'model'

Support for this model was officially added to llama.cpp ~1 hour ago at the time of writing, so you may need to update once llama-cpp-python picks it up.

Edit: I had a bad download.
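For anyone hitting the same ValueError, it's worth ruling out a corrupt download before touching llama.cpp itself. A minimal sketch of that check, reusing the model path from the traceback above (a healthy GGUF file starts with the 4-byte ASCII magic "GGUF"; a truncated download or a saved HTML error page won't):

```shell
# Sanity-check the GGUF download; the path is the one from this thread.
f=models/gemma-2-27b-it-Q4_K_M.gguf
[ -f "$f" ] || { echo "file not found: $f"; exit 0; }   # skip gracefully if absent
magic=$(head -c 4 "$f")
[ "$magic" = "GGUF" ] && echo "magic OK" || echo "bad magic - re-download the file"
```

If the magic is fine but the file is still rejected, comparing the file size against the one listed on the model page is the next cheapest check.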

I'm getting the same error with the latest version of llama-cpp-python.

llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'gemma2'
llama_load_model_from_file: failed to load model

Same for me using llama.cpp:

llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'gemma2'  
llama_load_model_from_file: failed to load model

I've also checked out the b3259 branch/tag, with the same result.


The llama.cpp package has changed: you now have to use ./llama-cli instead of the old main binary.
Now it works correctly for me.
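For anyone else confused by the rename: recent llama.cpp builds ship a single llama-cli binary in place of the old main. A minimal invocation sketch, reusing the model file from this thread (the prompt and token count are arbitrary placeholders):

```shell
# Run from the llama.cpp build directory; guarded so the sketch is safe anywhere.
[ -x ./llama-cli ] || { echo "llama-cli not found - build llama.cpp first"; exit 0; }
./llama-cli -m models/gemma-2-27b-it-Q4_K_M.gguf -p "Hello, Gemma" -n 64
```

The -m, -p, and -n flags are the same ones the old main binary accepted; only the binary name changed.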

I ran ./llama-cli and got the same error.

Can you provide some instructions on how you are doing it?

I ran ./llama-cli and got the same error.

Do you have an up-to-date version of llama.cpp? Gemma 2 support was only added a few hours ago.

I do. It doesn't work for me :(

Try doing a git pull, then make clean, and rebuild your llama.cpp. I think you're using an old version.
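Spelling the suggestion above out as commands; this sketch assumes a make-based checkout living in ./llama.cpp (adjust the path, or use the equivalent cmake flow if that's how you built it):

```shell
# Update an existing llama.cpp checkout and rebuild from scratch.
cd llama.cpp 2>/dev/null || { echo "no llama.cpp checkout here"; exit 0; }
git pull                  # picks up the commit that added the 'gemma2' architecture
make clean
make -j"$(nproc)"         # rebuilds llama-cli along with the other binaries
```

The make clean matters: a stale object file can leave you on a build that still predates the new architecture even after a successful pull.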

Yeah, it's working for me with llama.cpp's ./llama-cli, so something must be broken in your local setup D:
