LLaMA 3 8B Dirty Writing Prompts LORA

Testing with L3 8B Base

When alpha is left at the default (64), the LoRA just acts like an r/DWP comment generator for a given prompt.

Testing with Stheno 3.3

When alpha is bumped to 256, the LoRA's effects show up on the kinds of prompts we trained on; lower alpha values or out-of-scope prompts are unaffected.

When alpha is bumped to 768, it always steers the conversation to be horny and makes up excuses to create lewd scenarios.
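Why alpha behaves this way: in LoRA, the adapter's contribution to a weight is scaled by alpha / r, so raising alpha at inference time linearly amplifies the learned delta. A minimal sketch of that arithmetic, assuming a rank of r = 64 (check the adapter's adapter_config.json for the real value) and toy matrix sizes:

```python
import numpy as np

# LoRA adds a low-rank update to a frozen weight W:
#   W' = W + (alpha / r) * (B @ A)
# so bumping alpha from 64 to 256 or 768 multiplies the
# adapter's delta by 4x or 12x without retraining anything.

r = 64                      # assumed LoRA rank
d_out, d_in = 32, 48        # toy layer dimensions for illustration
rng = np.random.default_rng(0)
A = rng.standard_normal((r, d_in))    # down-projection (trained)
B = rng.standard_normal((d_out, r))   # up-projection (trained)

def lora_delta(alpha, A, B, r):
    """The weight update the adapter contributes at a given alpha."""
    return (alpha / r) * (B @ A)

base = lora_delta(64, A, B, r)        # default alpha: 1x scaling
print(np.allclose(lora_delta(256, A, B, r), 4 * base))   # 4x the delta
print(np.allclose(lora_delta(768, A, B, r), 12 * base))  # 12x the delta
```

This matches the behaviour described above: at 256 the update is strong enough to surface on in-distribution prompts, and at 768 the 12x-amplified delta dominates the base model's behaviour on everything.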

This is completely emergent behaviour; we haven't trained for it. All we did was... read here in the model card.

Model tree for nothingiisreal/llama3-8B-DWP-lora