---
license: gpl-3.0
datasets:
  - databricks/databricks-dolly-15k
language:
  - en
pipeline_tag: question-answering
---

nanoalpaca-3b is a minimal Alpaca-LoRA adapter trained on the databricks/databricks-dolly-15k dataset on top of the OpenLLaMA-3B-600BT base model.
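
The adapter can be attached to the base model with the PEFT library for inference. The sketch below is illustrative: the base checkpoint id `openlm-research/open_llama_3b_600bt_preview`, the adapter repo id `Sovenok-Hacker/nanoalpaca-3b`, the Alpaca-style prompt template, and the generation settings are assumptions, not values confirmed by this card.

```python
# Minimal inference sketch (assumed repo ids and prompt format).
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base_id = "openlm-research/open_llama_3b_600bt_preview"  # assumed base checkpoint
adapter_id = "Sovenok-Hacker/nanoalpaca-3b"              # assumed adapter repo id

tokenizer = LlamaTokenizer.from_pretrained(base_id)
model = LlamaForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
model.eval()

# Alpaca-style prompt: an instruction followed by an empty response slot.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWhat is a LoRA adapter?\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```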

This repository provides the pre-trained LoRA adapter and a Colab Jupyter notebook for fine-tuning (about 3 hours per epoch on a T4 GPU).
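
For reference, a LoRA fine-tuning setup with PEFT typically looks like the sketch below. The rank, alpha, dropout, and target modules shown are illustrative assumptions, not the values used in the notebook.

```python
# Rough LoRA fine-tuning setup sketch (hyperparameters are assumptions).
from transformers import LlamaForCausalLM
from peft import LoraConfig, get_peft_model

model = LlamaForCausalLM.from_pretrained("openlm-research/open_llama_3b_600bt_preview")

lora_config = LoraConfig(
    r=8,                                   # adapter rank (assumed)
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],   # attention projections commonly targeted
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA weights are trainable
# The wrapped model can then be passed to a standard transformers Trainer
# with the databricks-dolly-15k examples formatted as Alpaca-style prompts.
```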