
This is my very first fine-tuned model: a recipe-generating AI that creates recipes from ingredients provided by the user.

However, I found that fine-tuning a small model like TinyLlama on a large dataset resulted in poor inference performance. The results from TinyLlama are unsatisfactory at this time.

Since small models have many benefits, I will keep looking into improving the performance of small models fine-tuned for cooking recipes.

The base model is TinyLlama, fine-tuned on the recipe_nlg dataset, which I filtered down to keep mostly the best recipes.
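
For context, here is a hedged sketch of what that data-preparation step could look like with the Hugging Face `datasets` library. The filter criterion is an illustrative assumption, not my actual selection rule, and recipe_nlg must be downloaded manually before loading.

```python
# Hypothetical data-preparation sketch. The recipe_nlg dataset on the
# Hub requires a manual download into a local data_dir, and the filter
# below is only an illustrative stand-in for the real selection rule.
from datasets import load_dataset

ds = load_dataset("recipe_nlg", data_dir="path/to/recipe_nlg", split="train")

# Keep recipes with a reasonable number of steps and ingredients.
subset = ds.filter(
    lambda r: len(r["directions"]) >= 5 and len(r["ingredients"]) >= 3
)
print(f"Kept {len(subset)} of {len(ds)} recipes")
```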

There are many versions of this model, since I experimented with many of the hyperparameters. Each model's name indicates the parameters used to train it, in this order: LoRA rank, alpha, training steps, and learning rate.
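
To illustrate the naming scheme, here is a minimal LoRA configuration sketch using the `peft` library; the concrete values below are placeholders, not the settings of any released version.

```python
# Illustrative LoRA fine-tuning config. The values below are placeholders
# that mirror the naming scheme (rank, alpha, steps, learning rate); they
# are not the actual hyperparameters of any released version.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,            # LoRA rank
    lora_alpha=32,   # LoRA alpha
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
# Such a run, trained for 500 steps at a learning rate of 2e-4, would be
# named with those four values in that order (hypothetical example).
```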

The model's behavior varies with the parameters I adjusted, so not all versions have the same consistency or performance; some versions performed much better than others.

The best way to use this model is to start with the prompt "Create a detailed step by step cooking recipe instructions based on these ingredients", followed by the ingredients you would like to include in the recipe.
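
For example, here is a minimal inference sketch using `llama-cpp-python` with the GGUF weights; the local filename and sampling parameters are assumptions.

```python
# Minimal inference sketch with llama-cpp-python. The GGUF filename and
# sampling settings are assumptions, not fixed recommendations.
from llama_cpp import Llama

llm = Llama(model_path="tinyllama-AISmartRecipe-Lite.gguf", n_ctx=2048)

ingredients = "chicken breast, garlic, lemon, olive oil"
prompt = (
    "Create a detailed step by step cooking recipe instructions "
    f"based on these ingredients: {ingredients}"
)

output = llm(prompt, max_tokens=512, temperature=0.7)
print(output["choices"][0]["text"])
```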

Any feedback is welcome. You can contact me on Discord under the username _moemoe_.

Model details for moemoe101/tinyllama-AISmartRecipe-Lite: 1.1B parameters, llama architecture, available in GGUF format, trained on the recipe_nlg dataset.