pretraining Gemma for domain dataset

#41
by Iamexperimenting - opened

Hi team,

I would like to continue pretraining the Gemma model on my domain dataset. I want to train all of the parameters rather than using LoRA.

1.a: Does the tokenizer learn/add any new tokens (domain-specific words that are not present in the existing tokenizer vocabulary) during continued pre-training?

Can you please provide an example article on fine-tuning all parameters?

@ybelkada @suryabhupa

@suryabhupa @ybelkada can you please provide an example?

Google org

Hello! Sorry for the delay.

  1. I'm not sure what you mean by new tokens; you shouldn't need to add any new tokens when finetuning, and you are welcome to use any formatting template you'd like. Feel free to use our own formatting, especially as it is natively supported by our tokenizer.

  2. The Zephyr 7B team published their finetuning setup here: https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1, and other guides exist as well, such as https://ai.google.dev/gemma/docs/jax_finetune, https://lightning.ai/lightning-ai/studios/understanding-using-and-finetuning-gemma, and https://www.kaggle.com/code/lucamassaron/fine-tune-gemma-7b-it-for-sentiment-analysis. A minimal full-parameter sketch also follows below this list.
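For illustration only, a minimal full-parameter (no LoRA) continued-pretraining sketch with the Hugging Face Trainer could look like the one below; the corpus path, block size, and all hyperparameters are placeholders rather than recommendations, and the simple concatenate-and-chunk packing should be adapted to whatever scheme you settle on.

```python
# Sketch: full-parameter continued pretraining of Gemma on a plain-text domain
# corpus. "domain_corpus.txt", block_size, and all hyperparameters are placeholders.
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_id = "google/gemma-7b"     # base (non-instruct) checkpoint
block_size = 2048                # placeholder context length

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

raw = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    # The Gemma tokenizer prepends <bos> by default; append <eos> explicitly so
    # document boundaries survive packing.
    return tokenizer([t + tokenizer.eos_token for t in batch["text"]])

def group_texts(examples):
    # Simple packing: concatenate everything, then split into fixed-size blocks.
    concatenated = {k: sum(examples[k], []) for k in examples}
    total = (len(concatenated["input_ids"]) // block_size) * block_size
    return {k: [v[i:i + block_size] for i in range(0, total, block_size)]
            for k, v in concatenated.items()}

tokenized = raw["train"].map(tokenize, batched=True, remove_columns=["text"])
packed = tokenized.map(group_texts, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gemma-domain-cpt",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        learning_rate=2e-5,
        num_train_epochs=1,
        bf16=True,
        logging_steps=10,
    ),
    train_dataset=packed,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
```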

Iamexperimenting changed discussion title from domain specific fine-tuning to pretraining Gemma for domain dataset

@suryabhupa @ybelkada
The token I was referring to is a "domain-specific word (technical word) which is not present in the trained tokenizer's vocabulary".

I think I used the wrong term in the title and the description above, so I have changed it.

Basically, I want to continue training the Gemma model on my domain data, and I want to train all of the parameters in the model.

Just as the Google team trained the base Gemma model, I would like to take the base Gemma model and continue training it on my domain data.

Did you use the <bos> and <eos> delimiters when training the base Gemma model?

@suryabhupa @ybelkada can you please guide here?

Hi @Iamexperimenting
Thanks! I will let @suryabhupa reply here whenever he can, as I am not familiar with the Gemma training procedure.

Google org

Hello! Yes, when constructing batches, I'd recommend having sequences in your pretraining set-up that have <bos> and <eos> tokens in the right places to delimit sequences, but also to properly construct the attention masks. We also use <bos> and <eos> tokens when doing so. You should experiment with how exactly you pack your examples into a single batch; I'd recommend checking out the T5, GPT, or PaLM papers for some details on how they did it. You do not need to add any extra tokens here.
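To make the attention-mask point concrete, here is a toy sketch (not Gemma's actual training code) of one common option: a per-document causal mask for a packed sequence, so that tokens cannot attend across document boundaries. The doc_ids layout is invented for illustration.

```python
# Toy example: build a block-diagonal causal attention mask for one packed
# sequence. doc_ids marks which document each position belongs to.
import torch

doc_ids = torch.tensor([0, 0, 0, 1, 1, 1, 1, 1])  # doc 0 has 3 tokens, doc 1 has 5

seq_len = doc_ids.size(0)
causal = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))  # standard causal mask
same_doc = doc_ids.unsqueeze(0) == doc_ids.unsqueeze(1)              # block-diagonal part
attention_mask = causal & same_doc                                   # True = may attend

print(attention_mask.int())
```

Whether cross-document attention is masked out like this or documents are simply separated by <eos>/<bos> under a plain causal mask differs between setups; the papers mentioned above describe their respective choices.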

@suryabhupa: Based on my understanding, GPT-2 inserts <|endoftext|> between documents and then chunks the result into equal-sized pieces (please correct me if that's wrong). I'm curious how <bos> and <eos> have been set up for Gemma. Let's say seq_len is 3 and we have two documents:

  • Hello India
  • Hello how are you doing

Will it be done like this (a toy code sketch of this flow follows the list below)?

  • First, add special tokens to the entire text corpus like this: <bos> Hello India <eos> <bos> Hello how are you doing <eos>
  • Split it based on seq_len:
    • <bos> Hello India
    • <eos> <bos> Hello
    • how are you
    • doing <eos>
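For concreteness, here is a toy sketch of the concatenate-then-chunk flow described above, using whitespace "tokens" and seq_len = 3 purely for illustration; a real setup would work on token ids from the tokenizer rather than strings.

```python
# Toy illustration of the packing scheme asked about above.
docs = ["Hello India", "Hello how are you doing"]
seq_len = 3

# 1) wrap every document in <bos> ... <eos> and concatenate
tokens = []
for d in docs:
    tokens += ["<bos>"] + d.split() + ["<eos>"]

# 2) split the resulting stream into fixed-size chunks
chunks = [tokens[i:i + seq_len] for i in range(0, len(tokens), seq_len)]
for c in chunks:
    print(c)
# ['<bos>', 'Hello', 'India']
# ['<eos>', '<bos>', 'Hello']
# ['how', 'are', 'you']
# ['doing', '<eos>']
```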
