Training embedding issues

#8
by Imran1 - opened

Hello,

It's great work.

I am facing some issues.
I can train/fine-tune embedding models using Sentence Transformers v3, but Alibaba-NLP/gte-Qwen2-7B-instruct is completely different.

I wanted to train with a batch size of 128, but the GPU runs out of memory; it crashes even with a batch size of 4.
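
For context, here is a minimal sketch of the memory-saving options I understand Sentence Transformers v3 offers (the dataset is a placeholder, and I haven't verified this combination on this particular model):

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CachedMultipleNegativesRankingLoss

model = SentenceTransformer("Alibaba-NLP/gte-Qwen2-7B-instruct")

# Placeholder data; swap in a real (anchor, positive) pair dataset.
train_dataset = Dataset.from_dict({
    "anchor": ["What is the capital of France?"],
    "positive": ["Paris is the capital of France."],
})

# Cached MNRL processes the batch in small mini-batches, so the
# in-batch negatives of a large batch don't all need activation memory at once.
loss = CachedMultipleNegativesRankingLoss(model, mini_batch_size=4)

args = SentenceTransformerTrainingArguments(
    output_dir="gte-qwen2-ft",        # placeholder output path
    per_device_train_batch_size=4,
    gradient_accumulation_steps=32,   # emulates an effective batch of 128
    gradient_checkpointing=True,      # recompute activations to save memory
    bf16=True,                        # half-precision training
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```

Even with options like these, a 7B model is very tight on a single GPU, which is why I'm asking about your setup.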

Are you using PEFT and LoRA?
If so, are you training only the embedding layer, or full LoRA adapters?
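
For what it's worth, this is the kind of PEFT setup I attempted (a sketch only; the target module names are my guess based on the usual Qwen2 attention projections, not confirmed for this model):

```python
from peft import LoraConfig, TaskType, get_peft_model
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Alibaba-NLP/gte-Qwen2-7B-instruct")

# Guessed target modules: the standard Qwen2 attention projections.
lora_config = LoraConfig(
    task_type=TaskType.FEATURE_EXTRACTION,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Wrap only the underlying HF transformer; the pooling module is untouched.
model[0].auto_model = get_peft_model(model[0].auto_model, lora_config)
model[0].auto_model.print_trainable_parameters()
```

If this is on the right track, only the adapter weights get gradients and optimizer state, which should cut memory considerably.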

Could you share some basic code? It would help a lot, perhaps a reproducible GitHub notebook.

Regards,
Imran

Alibaba-NLP org

Thank you for your interest in the GTE model. Yes, we used PEFT and LoRA to train the model.

Could you please share your LoRA configuration? Thanks!
