---
datasets:
- ewof/code-alpaca-instruct-unfiltered
library_name: peft
tags:
- llama2-7b
- code
- instruct
- instruct-code
- code-alpaca
- alpaca-instruct
- alpaca
- llama7b
license: apache-2.0
---
We finetuned Llama2-7B on the Code-Alpaca-Instruct dataset ([ewof/code-alpaca-instruct-unfiltered](https://huggingface.co/datasets/ewof/code-alpaca-instruct-unfiltered)) for 5 epochs (~25,000 steps) using [MonsterAPI](https://monsterapi.ai)'s no-code [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm).

This dataset is an unfiltered version of HuggingFaceH4/CodeAlpaca_20K, with 36 instances of blatant alignment removed.

The finetuning run completed in 4 hours and cost only `$16`!
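For reference, the training data can be inspected directly with the `datasets` library. A minimal sketch, assuming the records follow the standard Alpaca `instruction`/`input`/`output` schema:

```python
from datasets import load_dataset

# Load the unfiltered Code-Alpaca instruction dataset used for this run
data = load_dataset("ewof/code-alpaca-instruct-unfiltered", split="train")

# Each record is assumed to follow the Alpaca schema:
# {"instruction": ..., "input": ..., "output": ...}
print(data[0])
```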
#### Hyperparameters & Run details:
- Model Path: meta-llama/Llama-2-7b
- Dataset: ewof/code-alpaca-instruct-unfiltered
- Learning rate: 0.0003
- Number of epochs: 5
- Data split: Training: 90% / Validation: 10%
- Gradient accumulation steps: 1
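
MonsterAPI's finetuner is no-code, so the exact training script is not public. As a rough open-source equivalent, here is a minimal LoRA sketch with `peft` and `transformers` using the hyperparameters above; the LoRA rank/alpha, target modules, batch size, and max sequence length are assumptions, not values from this run:

```python
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

base = "meta-llama/Llama-2-7b-hf"  # gated repo; requires accepted license
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.float16, device_map="auto")
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,       # assumed LoRA settings
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

data = load_dataset("ewof/code-alpaca-instruct-unfiltered", split="train")
data = data.train_test_split(test_size=0.1, seed=42)  # 90/10 split as above

def to_features(ex):
    # Concatenate instruction, optional input, and output into one sequence
    prompt = ex["instruction"]
    if ex.get("input"):
        prompt += "\n" + ex["input"]
    text = prompt + "\n" + ex["output"] + tokenizer.eos_token
    return tokenizer(text, truncation=True, max_length=512)  # assumed length

train = data["train"].map(to_features, remove_columns=data["train"].column_names)
val = data["test"].map(to_features, remove_columns=data["test"].column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llama2-7b-code-alpaca",
        learning_rate=3e-4,                # matches the run above
        num_train_epochs=5,
        gradient_accumulation_steps=1,
        per_device_train_batch_size=4,     # assumed; not stated in the card
        fp16=True,
        logging_steps=50,
        evaluation_strategy="epoch",
    ),
    train_dataset=train,
    eval_dataset=val,
    # mlm=False makes the collator copy input_ids to labels for causal LM loss
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```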
#### Loss metrics:
![training loss](train-loss.png "Training loss")
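
Since the output of a LoRA finetune is a PEFT adapter (`library_name: peft`), inference requires loading the adapter on top of the base model. A minimal sketch; the adapter repo id below is a placeholder for this repository:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "meta-llama/Llama-2-7b-hf"
adapter = "<this-repo-id>"  # replace with this model repository's id

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter)  # attach the LoRA adapter

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```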