GGUF Model not loading at all

#30
by jdhadljasnajd

I am trying to use TheBloke's Llama 2 chat model from Hugging Face, and I am repeatedly faced with the same error:

```
OSError: Can't load tokenizer for 'TheBloke/Llama-2-7B-chat-GGUF'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'TheBloke/Llama-2-7B-chat-GGUF' is the correct path to a directory containing all relevant files for a LlamaTokenizerFast tokenizer.
```
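(For context, the failing call is presumably the standard transformers loading pattern below; the original post doesn't include the script, so treat this as an assumed reproduction.)

```python
# Assumed reproduction -- the original post does not show the exact script.
# The GGUF repo contains only quantized weight files, no tokenizer files,
# so transformers raises the OSError quoted above.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TheBloke/Llama-2-7B-chat-GGUF")  # OSError
```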

Can someone please help me with this?

Same question

@cx819
This is not a Hugging Face Transformers model but a GGUF model. The transformers AutoTokenizer/AutoModel classes look for tokenizer and config files that this repo does not contain, which is why loading fails. GGUF models can only be used with llama.cpp or tools built on top of it (Ollama, llama-cpp-python, text-generation-webui).
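As a minimal sketch of the llama-cpp-python route (assuming `pip install llama-cpp-python huggingface_hub`; the `Q4_K_M` filename below is just one of several quantizations in the repo, pick whichever you want):

```python
# Sketch: load a GGUF file with llama-cpp-python instead of transformers.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quantized GGUF file from the repo (filename is an assumption;
# check the repo's file list for the quantization you want).
model_path = hf_hub_download(
    repo_id="TheBloke/Llama-2-7B-chat-GGUF",
    filename="llama-2-7b-chat.Q4_K_M.gguf",
)

# llama.cpp loads the tokenizer from inside the GGUF file itself,
# so no separate tokenizer files are needed.
llm = Llama(model_path=model_path, n_ctx=2048)

output = llm(
    "Q: What file format does llama.cpp use? A:",
    max_tokens=64,
    stop=["Q:"],
)
print(output["choices"][0]["text"])
```

Since this is a chat model, you can also pass `chat_format="llama-2"` to `Llama(...)` and call `llm.create_chat_completion(messages=[...])` so the Llama 2 chat template is applied for you instead of raw completion.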
