Llama 7B LoRA fine-tune: 2 epochs of training on WebNLG 2017, with a 64-token maximum length for both context and completion

#1
by Jojo567 - opened
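For reference, a minimal sketch of the setup the title describes, assuming the Hugging Face `peft` and `transformers` libraries. The checkpoint name, dataset field names, LoRA hyperparameters (`r`, `lora_alpha`, target modules), batch size, and learning rate are all assumptions for illustration, not details taken from this thread:

```python
# Hypothetical sketch: LoRA fine-tune of Llama 7B for 2 epochs on WebNLG 2017,
# truncating both context and completion to 64 tokens, as in the title.
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from peft import LoraConfig, get_peft_model

base = "huggyllama/llama-7b"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,  # assumed LoRA hyperparameters
    target_modules=["q_proj", "v_proj"],    # assumed attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)  # wraps the base model with LoRA adapters

def tokenize(example):
    # 64 tokens for the context and 64 for the completion, per the title;
    # "context"/"completion" field names are assumptions about the dataset.
    ctx = tokenizer(example["context"], truncation=True, max_length=64)
    tgt = tokenizer(example["completion"], truncation=True, max_length=64)
    return {"input_ids": ctx["input_ids"] + tgt["input_ids"]}

args = TrainingArguments(
    output_dir="llama7b-lora-webnlg",
    num_train_epochs=2,             # 2 epochs, as in the title
    per_device_train_batch_size=4,  # assumed
    learning_rate=2e-4,             # assumed
)
```

Only the LoRA adapter weights are trained under this config; the 7B base weights stay frozen, which is what makes the fine-tune feasible on a single GPU.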