
Fine-Tuning the Model?

#4 opened by NekoMikoReimu

Is there any information on how to fine-tune this model for specific use cases out there?
I'd like to try it, but it feels like the use of the NovelAI tokenizer might pose challenges that keep the usual off-the-shelf solutions from working.
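
For context, the kind of off-the-shelf setup I have in mind is sketched below: load the NovelAI tokenizer from its own repo (the way the model card does) and hand both pieces to the stock `Trainer`. The repo IDs, the toy dataset, the output path, and the hyperparameters are all assumptions/placeholders, and I haven't verified that this actually runs against the model's custom code.

```python
from transformers import (
    AutoModelForCausalLM,
    LlamaTokenizer,
    Trainer,
    TrainingArguments,
    DataCollatorForLanguageModeling,
)

# The tokenizer lives in a separate repo (assumption: the nerdstash v1 repo
# referenced on the model card), so it is loaded explicitly rather than via
# AutoTokenizer on the model repo.
tokenizer = LlamaTokenizer.from_pretrained(
    "novelai/nerdstash-tokenizer-v1",          # assumed tokenizer repo
    additional_special_tokens=["▁▁"],
)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # the collator needs a pad token

model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/japanese-stablelm-base-alpha-7b",  # assumed model repo
    trust_remote_code=True,                         # model ships custom modeling code
)

# Toy in-memory corpus; replace with a real dataset.
texts = [
    "吾輩は猫である。名前はまだ無い。",
    "国境の長いトンネルを抜けると雪国であった。",
]
train_dataset = [tokenizer(t, truncation=True, max_length=512) for t in texts]

training_args = TrainingArguments(
    output_dir="./jslm-finetune",      # placeholder output path
    per_device_train_batch_size=1,
    num_train_epochs=1,
    learning_rate=1e-5,
    logging_steps=1,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    # mlm=False gives the standard causal-LM objective (labels = shifted inputs)
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

If something like this works, the tokenizer question mostly reduces to whether it behaves like a normal slow `LlamaTokenizer` (padding, special tokens) once loaded; if not, that's where I'd expect the off-the-shelf recipes to break.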
