Didn't work for me.

#1 by gwong1963 - opened

I'm using an M2 MacBook Pro and oobabooga (text-generation-webui). These are the errors I'm getting:
File "/Users/***/text-generation-webui/modules/ui_model_menu.py", line 231, in load_model_wrapper

shared.model, shared.tokenizer = load_model(selected_model, loader)

                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/Users/***/text-generation-webui/modules/models.py", line 82, in load_model

metadata = get_model_metadata(model_name)

       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/Users/***/text-generation-webui/modules/models_settings.py", line 55, in get_model_metadata

metadata = metadata_gguf.load_metadata(model_file)

       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/Users/***/text-generation-webui/modules/metadata_gguf.py", line 69, in load_metadata

GGUF_MAGIC = struct.unpack("<I", file.read(4))[0]

         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Well, all the models pass the dummy check here: they start up and generate a few tokens. Try updating llama-cpp-python and try again.
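If updating doesn't help, it's also worth checking that the download isn't truncated, since the traceback dies exactly where the loader reads the 4-byte GGUF magic. A minimal sketch of that check (the file path is a placeholder for your actual .gguf):

```python
import struct

GGUF_MAGIC = 0x46554747  # the ASCII bytes "GGUF" read as a little-endian uint32

# Path is hypothetical; point it at the .gguf file you downloaded.
with open("model.gguf", "rb") as f:
    header = f.read(4)

if len(header) < 4 or struct.unpack("<I", header)[0] != GGUF_MAGIC:
    print("Not a valid GGUF file: possibly truncated or partially downloaded.")
else:
    print("GGUF magic looks fine.")
```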

Oh, I guess I ran out of memory. I lowered n_ctx in the model settings and it worked.
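For anyone loading the model with llama-cpp-python directly rather than through the web UI, the equivalent fix is passing a smaller n_ctx, which shrinks the KV cache and the memory needed at load time. A rough sketch (the path and context size are just examples):

```python
from llama_cpp import Llama

# Path is hypothetical; a smaller n_ctx means a smaller KV cache,
# so less RAM is required to load and run the model.
llm = Llama(model_path="model.gguf", n_ctx=2048)

output = llm("Q: What is the capital of France? A:", max_tokens=16)
print(output["choices"][0]["text"])
```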

Great! Have fun using the model! If you need more models, you can always search for them on my Hugging Face profile, or ask me for quants in case they aren't there!
