Dataset used: mpasila/Literotica-stories-short, which contains a subset of the stories from the full Literotica dataset, with each story chunked to fit within 8192 tokens.
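A minimal sketch of loading the dataset with the `datasets` library; the text column name (`"text"`) is an assumption, not confirmed by this card.

```python
from datasets import load_dataset

# Load the chunked Literotica subset (column name "text" is hypothetical).
dataset = load_dataset("mpasila/Literotica-stories-short", split="train")
print(dataset[0]["text"][:200])  # peek at the start of the first chunk
```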
Prompt format: none (plain text completion, no template).
LoRA: mpasila/Llama-3.1-Literotica-LoRA-8B
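Because the model was trained on raw text with no prompt template, inference is plain continuation. Below is a hedged sketch of applying the LoRA adapter to the base model with PEFT; the example prompt and sampling settings are illustrative, not recommended values.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = "unsloth/Meta-Llama-3.1-8B"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the LoRA adapter on top of the base weights.
model = PeftModel.from_pretrained(model, "mpasila/Llama-3.1-Literotica-LoRA-8B")

# No prompt format: feed the opening of a story and let the model continue it.
inputs = tokenizer("She stepped off the train and", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```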
Trained with regular LoRA (not quantized/QLoRA), with LoRA rank 128 and alpha 32. Trained for 1 epoch on an A40 for about 13 hours.
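For reference, a sketch of what this setup might look like in Unsloth, using the stated hyperparameters (rank 128, alpha 32, full-precision LoRA, 8192-token context); the target modules and remaining arguments are assumptions, not the exact training recipe.

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B",
    max_seq_length=8192,   # matches the 8192-token chunking
    load_in_4bit=False,    # regular LoRA, not QLoRA
)
model = FastLanguageModel.get_peft_model(
    model,
    r=128,          # LoRA rank, as stated above
    lora_alpha=32,  # LoRA alpha, as stated above
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # assumed
)
```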
## Uploaded model
- Developed by: mpasila
- License: Llama 3.1 Community License Agreement
- Finetuned from model: unsloth/Meta-Llama-3.1-8B
This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.