
Fine-tuning code?

#5
by Inje - opened

When can we expect model fine tuning code to be added to github?

Owner

Expected to be released in February.

Great to hear that! Thank you for the update.

I'm trying to finetune this model. Please share your pretrained checkpoints:
'./checkpoints/imp-v1-3b-pretrain/mm_projector.bin'

Is fine-tuning code available for any of the following?

  • fine-tuning on Google Colab
  • QLoRA
  • PEFT

Example Colab fine-tune:

https://colab.research.google.com/drive/1Rg44ZVPf3_cs77UUXmOp_eMzGLoQtml0?usp=sharing

I have been running some fine-tuning tests but I can't get it to work. The setup seems a little different from what I'm used to: I don't know whether I should pass the 'tokenizer' differently or concatenate everything into a single text input. Does anyone have a working example other than this one?

I don't know if this is correct:

image_tensor = model.image_preprocess(item["image"]) # current example

The code runs to some extent, but the 'loss' is never computed, so there is no backward pass.
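One common cause of this with Hugging Face-style causal LMs is that no `labels` tensor is passed to the model's forward call; without it, `outputs.loss` is `None` and there is nothing to backpropagate. Below is a minimal sketch of building labels with the standard `-100` ignore index. The helper name and the prompt/answer split are hypothetical illustrations, not this model's exact training format:

```python
IGNORE_INDEX = -100  # tokens with this label are skipped by the cross-entropy loss

def build_labels(input_ids, prompt_len):
    """Copy input_ids as labels, masking the prompt tokens so that
    only the answer tokens contribute to the loss."""
    return [IGNORE_INDEX] * prompt_len + input_ids[prompt_len:]

# hypothetical token ids: 3 prompt tokens followed by 2 answer tokens
labels = build_labels([101, 7592, 2088, 102, 0], prompt_len=3)
print(labels)  # [-100, -100, -100, 102, 0]
```

During training you would then pass these labels along with the inputs, e.g. `outputs = model(input_ids=input_ids, labels=labels, images=image_tensor)`, and call `outputs.loss.backward()`. If `labels` is omitted, most Hugging Face models simply return `loss=None`.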

I would appreciate the help.

thank you so much

@NickyNicky are you able to finetune?

I have not been able to fine tune this model, do you have any ideas?

Owner

Hi @NickyNicky @Inje, we have released the fine-tuning code on our GitHub page. Currently, only traditional full fine-tuning and LoRA fine-tuning are supported. If you get some inspiring results with QLoRA, please let us know.
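For the LoRA path mentioned above, a typical PEFT configuration looks like the following. This is a generic sketch, not the repo's actual script: the `target_modules` names are an assumption (they depend on the backbone's layer naming) and should be checked against the released code.

```python
from peft import LoraConfig, get_peft_model

# Assumed attention-projection module names; verify against the actual model.
lora_config = LoraConfig(
    r=16,                      # rank of the low-rank update matrices
    lora_alpha=32,             # scaling factor applied to the LoRA update
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# model = get_peft_model(base_model, lora_config)  # wraps the frozen base model
```

With this wrapper, only the small LoRA adapter matrices are trained while the base weights stay frozen, which is what makes LoRA fine-tuning feasible on a single consumer GPU.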

MILVLG changed discussion status to closed

@MILVLG does this model support QLoRA fine-tuning now? Is any code available?
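QLoRA is not officially supported per the owner's reply above, but in general it just combines 4-bit quantization of the base model with LoRA adapters. A configuration fragment using bitsandbytes through transformers is sketched below; the parameter choices are illustrative, and whether this works with the model's `custom_code` path is untested:

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization, the usual QLoRA setup
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,       # quantize the quantization constants too
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# model = AutoModelForCausalLM.from_pretrained(
#     "MILVLG/imp-v1-3b",
#     quantization_config=bnb_config,
#     trust_remote_code=True,
# )
# model = prepare_model_for_kbit_training(model)  # from peft, before applying LoRA
```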
