
Training diverges when used with Llama 2 70B and 4-bit QLoRA

#10
by alyssavance - opened

I posted the issue here, but I'm happy to discuss further if anyone can help. The divergence happens after ~20 steps (about six hours). Thanks!

https://github.com/TimDettmers/bitsandbytes/issues/663

Hi @alyssavance , have you read this? https://huggingface.co/togethercomputer/LLaMA-2-7B-32K/discussions/2

Since you are doing QLoRA, you might need to set trust_remote_code=False to use HF's Llama implementation; flash attention only works with float16 or bfloat16.
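For reference, a minimal sketch of what that setup might look like (the model id and quantization settings here are illustrative assumptions, not taken from this thread):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit QLoRA quantization config; bfloat16 compute dtype satisfies
# flash attention's float16/bfloat16 requirement.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "togethercomputer/LLaMA-2-7B-32K",  # assumed model id for illustration
    quantization_config=bnb_config,
    trust_remote_code=False,  # fall back to HF's built-in Llama implementation
    device_map="auto",
)
```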

@gardner I did. I had some type problems, but fixed them by removing the JIT decorator from rmsnorm. Right now it runs with no type errors and does inference fine; it just gradually diverges after the first few dozen steps.
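For anyone hitting the same dtype errors: a plain PyTorch RMSNorm without the @torch.jit.script decorator would look roughly like this (a sketch of the standard formulation, not the exact code from this repo):

```python
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    # Standard RMSNorm: normalize by the root-mean-square of the
    # activations, then apply a learned scale. No @torch.jit.script
    # decorator, so mixed float16/bfloat16 inputs don't trip the
    # scripted type checks.
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Compute the statistics in float32 for stability, then cast back.
        variance = x.to(torch.float32).pow(2).mean(-1, keepdim=True)
        x = x * torch.rsqrt(variance + self.eps).to(x.dtype)
        return self.weight * x
```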

Together org

Hi @alyssavance , did you try a smaller learning rate? Instead of 1e-4, it might be worth trying 2e-5 (the same as in the linear interpolation paper).
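In Trainer terms, that change would be something like the following (everything besides learning_rate is a placeholder, not a value from this thread):

```python
from transformers import TrainingArguments

# Drop the learning rate from 1e-4 to 2e-5 as suggested above;
# the remaining hyperparameters are illustrative defaults.
training_args = TrainingArguments(
    output_dir="qlora-llama2-70b",
    learning_rate=2e-5,
    warmup_ratio=0.03,
    lr_scheduler_type="cosine",
    bf16=True,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
)
```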
