---
license: gpl-3.0
datasets:
- databricks/databricks-dolly-15k
language:
- en
pipeline_tag: question-answering
---

Minimal Alpaca-LoRA trained on the [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) dataset and based on [OpenLLaMA-3B-600BT](https://huggingface.co/openlm-research/open_llama_3b_600bt_preview). This repository contains the pre-trained LoRA adapter and a [Colab Jupyter notebook](https://colab.research.google.com/#fileId=https://huggingface.co/Sovenok-Hacker/openalpaca-3b/blob/main/finetune.ipynb) for fine-tuning (about 3 hours for 1 epoch on a T4).
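
A minimal sketch of loading the adapter for inference with `transformers` and `peft`. The adapter repo id is inferred from the notebook URL above, and the Alpaca-style prompt template is an assumption; adjust both to match your setup:

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

BASE_MODEL = "openlm-research/open_llama_3b_600bt_preview"
ADAPTER = "Sovenok-Hacker/openalpaca-3b"  # assumed: this repo, per the notebook URL

# Load the base model in fp16, then apply the LoRA adapter on top.
tokenizer = LlamaTokenizer.from_pretrained(BASE_MODEL)
model = LlamaForCausalLM.from_pretrained(
    BASE_MODEL, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, ADAPTER)
model.eval()

# Assumed Alpaca-style prompt format; check finetune.ipynb for the exact template.
prompt = "### Instruction:\nExplain what a LoRA adapter is.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```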