Sovenok-Hacker committed on
Commit
f0261cc
1 Parent(s): c39278f

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -9,4 +9,4 @@ pipeline_tag: question-answering
 
 Minimal Alpaca-LORA trained with [databricks/databricks-dolly-v2-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) dataset and based on [OpenLLaMA-3B-600BT](https://huggingface.co/openlm-research/open_llama_3b_600bt_preview).
 
-### I have no powerful GPUs, so I am training it using Google Colab. I am working on Jupyter Notebook to train it and then I release it.
+There is a pretrained LoRA adapter and a [Colab Jupyter notebook](https://colab.research.google.com/#fileId=https://huggingface.co/Sovenok-Hacker/openalpaca-3b/blob/main/finetune.ipynb) to finetune it (about 3 hours for 1 epoch on a T4).