The training script is missing

#2
by jampekka - opened

The training script for v0.2 seems to be missing. The v0.1 repo does have train_unsloth_7b.py.

I'm trying to figure out what special tokenizations and instruction formats were used for the different data sources/tasks, but I can't seem to find any info about it.
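For context, instruction finetunes often wrap each training pair in a fixed prompt template; below is a sketch of one common (Alpaca-style) template. This is purely illustrative — the actual template and special tokens used for these models are exactly what's being asked about here and are not documented.

```python
# Illustrative only: a generic Alpaca-style instruction template. The real
# template and special tokens used by these models are unknown.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n{response}"
)

def format_example(instruction: str, response: str) -> str:
    """Fill the template with one instruction/response training pair."""
    return ALPACA_TEMPLATE.format(instruction=instruction, response=response)

print(format_example("Käännä englanniksi: kiitos", "thank you"))
```

Knowing which template (and which end-of-turn tokens) was used matters because inference prompts must match the training format to get sensible outputs.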

Thank you so much for giving us Finnish-language models too! 🔥🇫🇮

Finnish-NLP org

We are releasing new 3b and 7b models soon.
I will add the finetuning scripts later, once those are published and once I have prepared the finetuning tutorial notebooks.
Unsloth, which I have mainly used in my own finetuning trials, unfortunately does not work on Google Colab for our upcoming 3b model, so I will need to do it with peft.
But I will probably also share a finetuning sample with Unsloth, which works on newer GPUs.
