---
datasets:
- tatsu-lab/alpaca
library_name: peft
license: apache-2.0
tags:
- facebook/opt-125m
- code
- instruct
- alpaca-instruct
- alpaca
---
We finetuned facebook/opt-125m on the tatsu-lab/alpaca dataset for 10 epochs using [MonsterAPI](https://monsterapi.ai)'s no-code [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm).
The dataset is an unfiltered variant of tatsu-lab/alpaca, with 36 instances of blatant alignment removed.
The finetuning run completed in 40 minutes and cost us only `$4` in total!
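
Since the adapter was trained with `peft` (see `library_name` in the metadata above), it can be loaded on top of the base model for inference. Below is a minimal sketch; the repo ID `your-username/opt-125m-alpaca` is a placeholder for this model page's actual path.

```python
# Minimal sketch of loading this PEFT adapter for inference.
# Replace the placeholder repo ID with this model page's actual path.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_repo = "your-username/opt-125m-alpaca"  # hypothetical placeholder
model = AutoPeftModelForCausalLM.from_pretrained(adapter_repo)
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")

prompt = "Write a short poem about the ocean."
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```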
#### Hyperparameters & Run details:
- Model: facebook/opt-125m
- Dataset: tatsu-lab/alpaca
- Learning rate: 0.0003
- Number of epochs: 10
- Data split: Training: 90% / Validation: 10%
- Gradient accumulation steps: 1
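
For reference, here is a hedged sketch of an equivalent local run with `transformers` + `peft`, wired to the hyperparameters listed above. MonsterAPI's exact recipe is not published in this card, so the LoRA rank/alpha, max sequence length, and batch settings below are assumptions, not the values used for this run.

```python
# Hypothetical local reproduction of the run described above.
# Stated values: lr 3e-4, 10 epochs, 90/10 split, grad accumulation 1.
# Assumed values: LoRA r=8 / alpha=16, max_length=512, default batch size.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "facebook/opt-125m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Assumed LoRA settings; the card does not state the adapter config.
model = get_peft_model(model, LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16))

# 90% train / 10% validation, as stated in the run details.
dataset = load_dataset("tatsu-lab/alpaca")["train"].train_test_split(test_size=0.1)

def tokenize(batch):
    # The alpaca dataset ships a ready-made "text" prompt column.
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(
    tokenize, batched=True, remove_columns=dataset["train"].column_names
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="opt-125m-alpaca-lora",
        learning_rate=3e-4,            # stated learning rate
        num_train_epochs=10,           # stated epoch count
        gradient_accumulation_steps=1, # stated grad accumulation
        evaluation_strategy="epoch",
    ),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```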