---
license: cc-by-nc-4.0
language:
- en
tags:
- text-generation
datasets:
- sahil2801/CodeAlpaca-20k
pipeline_tag: text-generation
---
<br><br>
<p align="center">
<img src="https://github.com/jina-ai/finetuner/blob/main/docs/_static/finetuner-logo-ani.svg?raw=true" alt="Finetuner logo: Finetuner helps you to create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications." width="150px">
</p>
<p align="center">
<b>LLM Generation models trained by Jina AI, Finetuner team.</b>
</p>
This repo contains the full weights (8-bit) for Falcon-7b
fine-tuned on the [Code Alpaca](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k) dataset.

## Reproduction
This version of the weights was trained with the following hyperparameters (a configuration sketch follows the list):
- Epochs: 6
- Batch size: 128
- Micro batch size: 8
- Learning rate: 3e-4
- LoRA _r_: 8
- LoRA target modules: query_key_value
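
For reference, the effective batch size of 128 corresponds to the micro batch size of 8 with gradient accumulation over 16 steps. The sketch below shows one way these values map onto a PEFT `LoraConfig` and `TrainingArguments`; `lora_alpha` and `lora_dropout` are assumed defaults not stated in this card, and the authoritative configuration is the `finetune.py` script in the jerboa repository.

```python
# Sketch only: the actual training code lives in https://github.com/jina-ai/jerboa.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=8,                                 # LoRA rank from the list above
    target_modules=["query_key_value"],  # LoRA target modules from the list above
    lora_alpha=16,                       # assumption: common default, not stated in this card
    lora_dropout=0.05,                   # assumption: common default, not stated in this card
    bias="none",
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="./lora-alpaca-code",
    num_train_epochs=6,
    per_device_train_batch_size=8,       # micro batch size
    gradient_accumulation_steps=16,      # 128 effective batch size / 8 micro batch size
    learning_rate=3e-4,
)
```
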
You can reproduce the training with this repository:
https://github.com/jina-ai/jerboa

Make sure to install the requirements, then fine-tune with the following command:
```bash
python finetune.py \
--base-model tiiuae/falcon-7b --lora-target-modules query_key_value \
--data-path sahil2801/CodeAlpaca-20k --output-dir ./lora-alpaca-code \
--batch-size 128 --micro-batch-size 8 --eval-limit 45 \
--eval-file code_eval.jsonl --wandb-project jerboa --wandb-log-model \
--wandb-watch gradients --num-epochs 6
```
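
Below is a minimal sketch for loading the weights and generating text with the standard `transformers` API; the repository id is a placeholder for this model's actual Hugging Face id, and 8-bit loading requires `bitsandbytes`.

```python
# Sketch only: replace the placeholder repo id with this model's actual Hugging Face id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jinaai/falcon-7b-code-alpaca"  # placeholder, not confirmed by this card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    load_in_8bit=True,       # requires bitsandbytes; matches the 8-bit weights mentioned above
    trust_remote_code=True,  # Falcon models use custom modeling code
)

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For best results, prompts should likely follow the Alpaca-style instruction format used for the Code Alpaca training data rather than the bare prompt shown above.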