I am trying to run the GGUF Q5 version of the model. I run into the following error when I load the model from a local path, but it works fine when I load it directly from the Hugging Face Hub.
RuntimeError: Failed to create LLM 'llama' from './Model/codellama-34B/codellama-34b.Q5_K_M.gguf'.
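For context, here is a minimal sketch of the two loading paths, assuming the ctransformers library (its `AutoModelForCausalLM.from_pretrained` raises the "Failed to create LLM" error shown above); the Hub repo id is an assumption, the local path and quantized filename are taken from the error message:

```python
from ctransformers import AutoModelForCausalLM

# Loading from the Hugging Face Hub works (assumed repo id, substitute the one actually used):
llm_hub = AutoModelForCausalLM.from_pretrained(
    "TheBloke/CodeLlama-34B-GGUF",
    model_file="codellama-34b.Q5_K_M.gguf",
    model_type="llama",
)

# Loading the same quantized file from a local path raises the RuntimeError:
llm_local = AutoModelForCausalLM.from_pretrained(
    "./Model/codellama-34B/codellama-34b.Q5_K_M.gguf",
    model_type="llama",
)
```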