---
datasets:
- argilla/distilabel-intel-orca-dpo-pairs
language:
- en
license: cc-by-nc-4.0
base_model:
- upstage/SOLAR-10.7B-Instruct-v1.0
---
# Model Card for Model ID
This is an experiment in LLM fine-tuning. The model was fine-tuned from upstage/SOLAR-10.7B-Instruct-v1.0 with Direct Preference Optimization (DPO) on the argilla/distilabel-intel-orca-dpo-pairs dataset, following the Google Colab notebook from this article: https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac
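The pipeline in the linked article reformats each preference pair into the `prompt` / `chosen` / `rejected` fields that TRL's `DPOTrainer` expects before training. A minimal sketch of that preprocessing step, assuming ChatML-style turns and the column names `system` / `input` / `chosen` / `rejected` (these are assumptions; check the actual dataset schema and SOLAR's own chat template before reusing this):

```python
# Sketch of the DPO data-preparation step from the linked article.
# Column names and the ChatML template are assumptions, not verified
# against the dataset or SOLAR's tokenizer chat template.

def to_chatml(system: str, user: str) -> str:
    """Render a system + user turn in ChatML, ending at the assistant turn."""
    prompt = ""
    if system:
        prompt += f"<|im_start|>system\n{system}<|im_end|>\n"
    prompt += f"<|im_start|>user\n{user}<|im_end|>\n<|im_start|>assistant\n"
    return prompt

def format_dpo_row(row: dict) -> dict:
    """Map one preference-pair row to the fields DPOTrainer consumes."""
    return {
        "prompt": to_chatml(row.get("system", ""), row["input"]),
        "chosen": row["chosen"] + "<|im_end|>\n",
        "rejected": row["rejected"] + "<|im_end|>\n",
    }
```

With Hugging Face `datasets`, this function would typically be applied with `dataset.map(format_dpo_row)` before constructing the trainer.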