LoRA model or standard finetune?

by ghogan42 - opened

Hi, this is just a quick question.
Is this model a 4-bit version of one of the LoRA-trained models, like this one: https://huggingface.co/baseten/alpaca-30b?
Or was it trained with standard fine-tuning/training scripts?

This is just Llama 30B merged with Chansung's 30B Alpaca LoRA. It works with the newest GPTQ and was quantized using the --true-sequential and --act-order optimizations.
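
In case it helps anyone reproduce the merge step, here's a minimal sketch using PEFT's merge_and_unload(); the base and adapter repo IDs below are placeholders I'm assuming, not necessarily the exact ones used for this upload. The merged folder would then be handed to GPTQ with the --true-sequential and --act-order flags mentioned above.

```python
# Sketch: merge a LoRA adapter into the base model with PEFT, then save the
# full-weight result so it can be quantized separately with GPTQ.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_ID = "huggyllama/llama-30b"          # assumed base model repo
ADAPTER_ID = "chansung/alpaca-lora-30b"   # assumed LoRA adapter repo

base = AutoModelForCausalLM.from_pretrained(BASE_ID, torch_dtype="auto")
model = PeftModel.from_pretrained(base, ADAPTER_ID)
model = model.merge_and_unload()          # bake the LoRA weights into the base model

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
model.save_pretrained("llama-30b-alpaca-merged")
tokenizer.save_pretrained("llama-30b-alpaca-merged")
# The merged checkpoint can then be 4-bit quantized with GPTQ
# (using --true-sequential and --act-order).
```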

ghogan42 changed discussion status to closed
