---
datasets:
- PygmalionAI/PIPPA
- lemonilia/LimaRP
---
## Gen Settings & Prompting
See https://rentry.org/tsukasamodel for recommended generation settings and prompting format.
## GPTQ
Quantized with a sequence length of 2048, using wikitext as the calibration dataset.
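The quantization script itself is not included in this card; as a rough sketch, GPTQ calibration typically tokenizes the wikitext corpus and slices it into fixed-length windows (2048 tokens here). The helper below is hypothetical and uses plain lists in place of a real tokenizer:

```python
def make_calibration_windows(token_ids, seq_len=2048):
    """Slice a long token stream into fixed-length calibration windows.

    A trailing window shorter than seq_len is dropped, since GPTQ
    calibration usually expects uniformly sized samples.
    """
    return [
        token_ids[i:i + seq_len]
        for i in range(0, len(token_ids) - seq_len + 1, seq_len)
    ]

# Toy example: 5000 "tokens" yield two full 2048-token windows.
windows = make_calibration_windows(list(range(5000)), seq_len=2048)
print(len(windows), len(windows[0]))  # → 2 2048
```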
## Training
[axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) was used for training
on an 8x NVIDIA A100 GPU cluster.
The A100 GPU cluster was graciously provided by [lloorree](https://huggingface.co/lloorree).
Rank 8 QLoRA (all modules) tune.
The base model alpindale/goliath-120b was tuned on koishi commit 6e675d1 for one epoch,
then tuned on pippa commit 6412b0c for one epoch (metharme completion format),
then tuned on limarp (without the ponyville, lolicit, all the fallen, and eka's portal subsets), version 2023-10-19, for 2 epochs in metharme completion format.
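For context on why a rank 8 QLoRA over all modules is tractable on a 120B-parameter base: a rank-r LoRA pair on a d_out × d_in weight matrix adds only r·(d_in + d_out) trainable parameters. A quick illustrative calculation (the layer dimensions below are made up, not Goliath's actual shapes):

```python
def lora_param_count(d_in, d_out, r=8):
    """Trainable parameters added by a rank-r LoRA pair
    (A: r x d_in, B: d_out x r)."""
    return r * (d_in + d_out)

full = 8192 * 8192                  # parameters in the frozen full weight
lora = lora_param_count(8192, 8192, r=8)
print(lora, f"{lora / full:.4%}")   # → 131072 0.1953%
```

At rank 8 the adapter is a fraction of a percent of each layer it attaches to, which is what makes tuning a model this size on a single cluster feasible.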