Trained on a 3090. It took ~9 hours at 27 s/it with the default configuration of 1,218 iterations. Commit: a2607fa - https://github.com/tloen/alpaca-lora

This is for the 7B model.
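In case it helps anyone reproducing this, here's a minimal sketch of the LoRA setup I believe finetune.py builds around that commit. The hyperparameter values and the base-model name below are the repo defaults as far as I can tell, not something I've re-verified against a2607fa, so treat them as assumptions and check finetune.py itself:

```python
# Sketch of the alpaca-lora LoRA setup (assumed defaults; verify in finetune.py)
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training

model = AutoModelForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",  # 7B base weights commonly used at the time
    load_in_8bit=True,                # 8-bit loading is what makes this fit a 24 GB 3090
    device_map="auto",
)
model = prepare_model_for_int8_training(model)  # cast layer norms, enable input grads

lora_config = LoraConfig(
    r=8,                                  # LoRA rank
    lora_alpha=16,                        # LoRA scaling factor
    target_modules=["q_proj", "v_proj"],  # adapt only the attention q/v projections
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a few million trainable params vs ~7B total
```

If I'm reading the defaults right, the 1,218 iterations also works out to 3 epochs over the ~52K-example Alpaca dataset at an effective batch size of 128 (52,002 / 128 ≈ 406 steps per epoch).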