Error for LlamaForCausalLM.from_pretrained in HuggingFace

#2
by Selyam - opened

Invoking LlamaForCausalLM.from_pretrained for this model causes an error:
OSError: Could not locate pytorch_model-00001-of-00006.bin inside
anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g.

This may be because pytorch_model.bin.index.json lists multiple shards containing the model weights, inherited from Llama.
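One way to confirm that theory (a diagnostic sketch, not a fix) is to compare the shard filenames listed in pytorch_model.bin.index.json against the files actually present in the repo. The weight names and file listing below are placeholders for illustration; in practice you would fetch the real index from the repo and get the file listing from `huggingface_hub.list_repo_files(repo_id)`.

```python
import json

# pytorch_model.bin.index.json maps each weight name to the shard file
# that contains it; from_pretrained reads this map and then tries to
# fetch every shard it references. Sample index for illustration:
index_json = json.loads("""
{
  "weight_map": {
    "model.embed_tokens.weight": "pytorch_model-00001-of-00006.bin",
    "lm_head.weight": "pytorch_model-00006-of-00006.bin"
  }
}
""")

# Shards the index claims exist.
expected_shards = sorted(set(index_json["weight_map"].values()))

# Files actually present in the repo (hypothetical listing; with
# huggingface_hub you would get this from list_repo_files(repo_id)).
repo_files = {"config.json", "tokenizer.model"}

# Any shard referenced by the index but absent from the repo will
# trigger exactly the OSError reported above.
missing = [s for s in expected_shards if s not in repo_files]
print(missing)
```

If `missing` is non-empty, the repo's index file references shards that were never uploaded, which matches the OSError above.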

Same issue, did you find any solution?

Same here. Also wondering if there is a solution. I also tried passing `from_pt=True` just to see if it made a difference, but I still get the same error.
I am running this in colab.

I also encountered the same error. If anyone finds a solution, please post it here.
