Quantization made by Richard Erkhov.
Llama-3.1-Instruct_NSFW-pretrained_e1-plus_reddit - GGUF
- Model creator: https://huggingface.co/athirdpath/
- Original model: https://huggingface.co/athirdpath/Llama-3.1-Instruct_NSFW-pretrained_e1-plus_reddit/
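For reference, a minimal sketch of loading one of the GGUF quants with llama-cpp-python. The filename, context size, and sampling settings below are assumptions for illustration, not taken from this repo; substitute whichever quant file you actually download.

```python
# Minimal sketch: load a GGUF quant of this model with llama-cpp-python.
# The filename below is hypothetical; use the quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3.1-Instruct_NSFW-pretrained_e1-plus_reddit.Q4_K_M.gguf",  # assumed filename
    n_ctx=8192,        # context window; adjust to available RAM/VRAM
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write the opening line of a short story."}],
    max_tokens=128,
    temperature=0.8,
)
print(out["choices"][0]["message"]["content"])
```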
Original model description:
base_model: athirdpath/Llama-3.1-Instruct_NSFW-pretrained_e1
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
athirdpath/Llama-3.1-Instruct_NSFW-pretrained_e1, further pretrained for 1 epoch on the dirty stories from nothingiisreal/Reddit-Dirty-And-WritingPrompts, with all entries scoring below 2 dropped.
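A minimal sketch of the data-prep step described above, not the author's exact pipeline: load the Reddit dataset and drop every row scoring below 2. The split name and the "score" column name are assumptions.

```python
# Hedged sketch of the filtering step: keep only dirty-story rows with score >= 2.
# Split and column names ("train", "score") are assumptions, not from the original card.
from datasets import load_dataset

ds = load_dataset("nothingiisreal/Reddit-Dirty-And-WritingPrompts", split="train")

# Drop everything scoring below 2; the filtered set is then used for one further
# epoch of pretraining on top of athirdpath/Llama-3.1-Instruct_NSFW-pretrained_e1.
filtered = ds.filter(lambda row: row["score"] >= 2)
print(f"kept {len(filtered)} of {len(ds)} rows")
```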
Why do this? I have a niche use case where I cannot go above an 8B model, and L3/3.1 are the only models in this size class that meet my needs for logic. However, both L3 and L3.1 have the damn repetition/token-overconfidence problem, and this further pretraining is meant to disrupt that certainty without disrupting the model's ability to function.
By the way, I think it's the lm_head that is causing the looping, but it might be the embeddings being too separated. I'm not going to pay for two more training runs to test them separately, though :p
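As an illustration of how one might probe those two suspects cheaply (this is a hedged sketch, not the author's method), the snippet below checks how peaked the lm_head's next-token distribution is on a short prompt, and how "separated" the input embeddings are via mean pairwise cosine similarity over a random sample of rows. The prompt and sample size are arbitrary choices.

```python
# Hedged diagnostic sketch for the lm_head-vs-embeddings question above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "athirdpath/Llama-3.1-Instruct_NSFW-pretrained_e1-plus_reddit"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
model.eval()

# 1) Token overconfidence: entropy of the next-token distribution (lower = more confident).
prompt = "Once upon a time,"
with torch.no_grad():
    logits = model(**tok(prompt, return_tensors="pt")).logits[0, -1]
probs = torch.softmax(logits.float(), dim=-1)
entropy = -(probs * probs.clamp_min(1e-12).log()).sum()
print(f"next-token entropy: {entropy.item():.3f} nats")

# 2) Embedding separation: average cosine similarity over a 1024-token random sample.
emb = model.get_input_embeddings().weight.float()
idx = torch.randperm(emb.shape[0])[:1024]
sample = torch.nn.functional.normalize(emb[idx], dim=-1)
mean_cos = (sample @ sample.T).mean()
print(f"mean pairwise cosine similarity: {mean_cos.item():.4f}")
```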