7dd8f96  latest peft makes model.save_pretrained in finetune.py save a 443 B adapter_model.bin, which is clearly incorrect (normally adapter_model.bin should be > 16 MB) (zetavg)
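A 443 B adapter_model.bin means the LoRA weights never made it into the saved state dict. As a minimal sketch of one common workaround (not necessarily the change made in this repo), the adapter weights can be saved explicitly with peft's get_peft_model_state_dict and the file size sanity-checked afterwards; the save_lora_adapter helper and the 1 MB threshold below are illustrative assumptions:

```python
import os

import torch
from peft import get_peft_model_state_dict


def save_lora_adapter(model, output_dir):
    """Save only the LoRA weights and fail loudly if they come out empty."""
    os.makedirs(output_dir, exist_ok=True)
    state_dict = get_peft_model_state_dict(model)  # just the LoRA A/B matrices
    path = os.path.join(output_dir, "adapter_model.bin")
    torch.save(state_dict, path)
    size = os.path.getsize(path)
    # A LLaMA-7B LoRA adapter is normally tens of MB; a few hundred bytes
    # means an empty state dict was serialized.
    if size < 1_000_000:
        raise RuntimeError(
            f"adapter_model.bin is only {size} bytes; LoRA weights were not saved"
        )
```

Pinning peft to a known-good version in requirements.lock.txt, as the following commits do, is another usual way to avoid the regression.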
3391607  update requirements.lock.txt (zetavg)
19e630b  update requirements.lock.txt (zetavg)
69acddc  try to lock requirements (zetavg)
b9929ef  fix (zetavg)
ea16ea2  Merge branch 'main' of github.com:zetavg/llama-lora (zetavg)
7b14813  this should be set for training (zetavg)
bd0584c  Update README.md (Pokai Chang)
c15d0e4  some fixes (zetavg)
f31c4d4  update README.md (zetavg)
490219f  Merge branch 'dev-2' (zetavg)
d803879  update README.md and LLaMA_LoRA.ipynb (zetavg)
9279c83  update (zetavg)
8788753  update finetune (zetavg)
5929f2a  simulate actual behavior in ui dev mode (zetavg)
e55a664  update default branch to main (zetavg)
0c97f91  fix possible error (zetavg)
fix “RuntimeError: expected scalar type Half but found Float” on lambdalabs and hf
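The "expected scalar type Half but found Float" error is a dtype mismatch between fp16 model weights and fp32 inputs or activations. A minimal sketch of the usual workaround, assuming a Hugging Face model and tokenizer (the generate_text helper and its arguments are illustrative, not the repo's actual code), is to run generation under an autocast context:

```python
import torch


def generate_text(model, tokenizer, prompt, max_new_tokens=128):
    """Run generation under autocast so fp16 weights and fp32 tensors mix safely."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    # autocast selects a matching dtype per op, avoiding
    # "expected scalar type Half but found Float".
    with torch.autocast("cuda"):
        output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```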