Fine tuning "gpt4-x-alpaca-13b-native-4bit-128g".

#52
by muzammil-eds - opened

Hi,
Can we fine-tune this model or not? I have tried loading it using:

import torch
from transformers import LlamaForCausalLM

model = LlamaForCausalLM.from_pretrained(
    'anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g',
    load_in_4bit=True,          # bitsandbytes on-the-fly 4-bit quantization
    torch_dtype=torch.float16,
    device_map='auto',          # spread layers across available devices
)

but it gave me an error. The file format is very different from the original alpaca-13b: the weights in this repo are a pre-quantized 4-bit GPTQ checkpoint stored as a .pt file rather than standard Hugging Face shards.
Can anyone tell me how to load and fine-tune this model?
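
For context, load_in_4bit is the bitsandbytes path in transformers: it expects full-precision weights that it quantizes on the fly, so it cannot read a pre-quantized GPTQ .pt checkpoint like the one in this repo. Below is a minimal sketch of how a GPTQ checkpoint of this kind is typically loaded, assuming the AutoGPTQ library (pip install auto-gptq); the model_basename value is hypothetical and should be changed to match the actual .pt file name in the repo:

from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

# The repo has no quantize_config.json, so describe the quantization
# manually, per the "4bit" and "128g" in the repo name.
quantize_config = BaseQuantizeConfig(
    bits=4,          # 4-bit quantization
    group_size=128,  # "128g" group size
)

model = AutoGPTQForCausalLM.from_quantized(
    'anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g',
    model_basename='gpt4-x-alpaca-13b-native-4bit-128g',  # hypothetical; match the .pt file
    quantize_config=quantize_config,
    use_safetensors=False,  # the repo ships .pt weights, not safetensors
    device='cuda:0',
)

For fine-tuning, the usual route on top of a GPTQ base is a parameter-efficient method such as LoRA via peft, since the packed 4-bit weights themselves are not directly trainable.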
