---
license: other
---
See the LICENSE file for license terms.
This is a collection of LLaMA models that were merged with my storytelling LoRAs (trained on the same storytelling dataset) and then converted to 4-bit.
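
For reference, here is a minimal sketch of what such a merge-then-quantize pipeline typically looks like, assuming the Hugging Face `transformers` and `peft` libraries; the model and LoRA paths are hypothetical placeholders, not the exact repos used for these checkpoints:

```python
# Minimal sketch: fold a storytelling LoRA into a base LLaMA model
# before 4-bit conversion. Paths are hypothetical placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("path/to/llama-13b")
model = PeftModel.from_pretrained(base, "path/to/storytelling-lora")
model = model.merge_and_unload()  # merge LoRA weights into the base weights

model.save_pretrained("llama-13b-storytelling-merged")
AutoTokenizer.from_pretrained("path/to/llama-13b").save_pretrained(
    "llama-13b-storytelling-merged"
)
# The merged full-precision model would then be quantized to 4-bit with a
# separate tool such as GPTQ; that step is not shown here.
```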
Unlike the standalone LoRAs, these merged models show some formatting oddness.
Triple newlines tend to start new chapters, which can break narrative flow (a possible post-processing workaround is sketched below).
A 30B model was also converted, but its formatting is badly broken, for reasons that are unclear.
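
As one possible workaround (a suggestion on my part, not something the models do themselves), collapsing runs of three or more newlines in generated text can reduce the chapter-break effect:

```python
import re

def collapse_newlines(text: str) -> str:
    """Collapse runs of 3+ newlines into a normal paragraph break."""
    return re.sub(r"\n{3,}", "\n\n", text)
```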