
Error loading in text-generation-webui

#1 by Ransom - opened

Traceback (most recent call last):
  File "E:\llmRunner\textV2\oobabooga-windows\text-generation-webui\server.py", line 69, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "E:\llmRunner\textV2\oobabooga-windows\text-generation-webui\modules\models.py", line 102, in load_model
    tokenizer = load_tokenizer(model_name, model)
  File "E:\llmRunner\textV2\oobabooga-windows\text-generation-webui\modules\models.py", line 127, in load_tokenizer
    tokenizer = LlamaTokenizer.from_pretrained(Path(f"{shared.args.model_dir}/{model_name}/"), clean_up_tokenization_spaces=True)
  File "E:\llmRunner\textV2\oobabooga-windows\installer_files\env\lib\site-packages\transformers\tokenization_utils_base.py", line 1812, in from_pretrained
    return cls._from_pretrained(
  File "E:\llmRunner\textV2\oobabooga-windows\installer_files\env\lib\site-packages\transformers\tokenization_utils_base.py", line 1975, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "E:\llmRunner\textV2\oobabooga-windows\installer_files\env\lib\site-packages\transformers\models\llama\tokenization_llama.py", line 96, in __init__
    self.sp_model.Load(vocab_file)
  File "E:\llmRunner\textV2\oobabooga-windows\installer_files\env\lib\site-packages\sentencepiece\__init__.py", line 905, in Load
    return self.LoadFromFile(model_file)
  File "E:\llmRunner\textV2\oobabooga-windows\installer_files\env\lib\site-packages\sentencepiece\__init__.py", line 310, in LoadFromFile
    return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
TypeError: not a string

Can anyone help?

Thanks

I get "IndexError: list index out of range"

Try updating your text-generation-webui to the latest version.
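If updating alone doesn't fix it: in my experience that "TypeError: not a string" usually means the model folder has no tokenizer.model file, so LlamaTokenizer ends up handing None to sentencepiece. A quick check you can run (the folder path below is only an example for this setup, point it at your own models directory):

from pathlib import Path

# Example path only -- adjust to the folder text-generation-webui loads the model from.
model_dir = Path(r"E:\llmRunner\textV2\oobabooga-windows\text-generation-webui\models\your-model-folder")

# LlamaTokenizer needs the SentencePiece vocabulary (tokenizer.model); if it is
# missing, from_pretrained passes None down to sentencepiece.Load, which then
# raises "TypeError: not a string".
for name in ("tokenizer.model", "tokenizer_config.json", "special_tokens_map.json", "config.json"):
    print(name, "found" if (model_dir / name).exists() else "MISSING")

If tokenizer.model shows up as MISSING, re-downloading that file from the model repository into the same folder should get you past this particular error.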
