LLaMA2-13B-Erebus-exl2

Original model: LLaMA2-13B-Erebus-v3
Model creator: KoboldAI

Quants

4.25bpw-h8 (main)
4.65bpw-h8
5bpw-h8
6bpw-h8
8bpw-h8

Quantization notes

Quantized with Exllamav2 0.0.13p2 using the default calibration dataset.
The model appears to be oriented toward NSFW storywriting rather than chatting, so it is probably best used with KoboldAI and a small max_new_tokens parameter.
Among LLM front-ends, KoboldAI is likely the best suited for writing stories because its interface is one large text field that can be freely edited.
Other apps such as Text-Generation-WebUI work too, but the Notebook/Default tab in TGWUI is less convenient than KoboldAI's editor.
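
For reference, exl2 quants like these are typically produced with Exllamav2's convert.py script; below is a sketch of how the 4.25bpw-h8 branch could have been built. The paths are placeholders, and the exact invocation used for these quants is an assumption:

```shell
# Convert an fp16 HF model to exl2 format.
# -b sets the target bits per weight, -hb the bits for the output (head) layer.
python convert.py \
    -i /path/to/LLaMA2-13B-Erebus-v3 \
    -o /path/to/working_dir \
    -cf /path/to/LLaMA2-13B-Erebus-exl2-4.25bpw-h8 \
    -b 4.25 \
    -hb 8
```

The other branches would use the same command with -b set to 4.65, 5, 6, or 8.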

Original model card

LLaMA2-13B-Erebus

Model description

This is the third generation of the original Shinen made by Mr. Seeker. The full dataset consists of 8 different sources, all surrounding the "Adult" theme. The name "Erebus" comes from Greek mythology and means "darkness", in line with Shin'en, or "deep abyss". For inquiries, please contact the KoboldAI community. Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.

Training procedure

LLaMA2-13B-Erebus was trained on 8x A6000 Ada GPUs for a single epoch. No special frameworks were used.

Training data

The data can be divided into 8 different datasets:

  • Literotica (everything with 3.0/5 or higher)
  • Sexstories (everything with 70 or higher)
  • Dataset-G (private dataset of X-rated stories)
  • Doc's Lab (all stories)
  • Lushstories (Editor's pick)
  • Swinglifestyle (all stories)
  • Pike-v2 Dataset (novels with "adult" rating)
  • SoFurry (collection of various animals)

The dataset uses [Genre: <comma-separated list of genres>] for tagging.

The full dataset is 2.3B tokens in size.
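
Because the training data carries this genre tag, a prompt can steer the model by opening with the same pattern. A minimal sketch (the helper name and genre values are illustrative, not part of the model card):

```python
def build_prompt(genres, story_start):
    """Prepend the dataset's [Genre: ...] tag to a story opening."""
    tag = "[Genre: " + ", ".join(genres) + "]"
    return tag + "\n" + story_start

# Example: ask for a story tagged with two genres.
prompt = build_prompt(["romance", "fantasy"], "The tavern door creaked open.")
# prompt starts with "[Genre: romance, fantasy]"
```

The model would then be expected to continue the story in the tagged genres.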

Limitations and biases

Based on known problems with NLP technology, potentially relevant factors include bias (gender, profession, race, and religion). Warning: This model has a very strong NSFW bias!
