
4.25 bpw (bits per weight) exl2 quantization of Venus-120b, using the measurement.json and dataset posted on the model page.

At this size the model can use CFG on 72 GB of VRAM (use a GPU split of 20,21.5,21.5).
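As a sketch of how that split might be applied, assuming the model is served through text-generation-webui's ExLlamav2 loader (the model directory name and exact flag spellings here are illustrative, not tested):

```shell
# Hypothetical invocation: load the 4.25 bpw exl2 quant across three GPUs
# with the 20/21.5/21.5 GB split suggested above. --cfg-cache allocates the
# extra cache that CFG negative prompts need with this loader.
python server.py \
  --model Venus-120b-v1.0-4.25bpw-exl2 \
  --loader exllamav2 \
  --gpu-split 20,21.5,21.5 \
  --cfg-cache
```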

Original model card

Venus 120b - version 1.0

Overview

The goal was to create a large model that's highly capable for RP/ERP scenarios. Goliath-120b is excellent for roleplay, and Venus-120b was created with the idea of attempting to mix more than two models together to see how well this method works.

Model Details

Warning: This model will produce NSFW content!

Results

Initial tests show that Venus-120b functions fine; overall it seems comparable to Goliath-120b. Some differences I noticed:

  1. Venus needs lower temperature settings than Goliath. I recommend a temp of around 0.7, and no higher than 1.0.
  2. Venus tends, on average, to produce longer responses than Goliath, probably due to the inclusion of SynthIA in the merge, which is trained to produce long chain-of-thought responses.
  3. Venus seems a bit less creative than Goliath in the prose it generates, probably due to the lack of Xwin and the inclusion of Nous-Hermes.

Keep in mind this is all anecdotal from some basic tests. The key takeaway is that Venus shows that Goliath is not a fluke.
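The temperature guidance above can be captured as a sampler preset. A minimal sketch: only the temperature values (0.7 recommended, 1.0 as a ceiling) come from the notes above; the other parameters and all names here are illustrative assumptions, not tested settings.

```python
# Hypothetical sampler preset for Venus-120b. Only "temperature" reflects
# the recommendation above; top_p and repetition_penalty are common
# defaults added for illustration, not tuned values.
VENUS_PRESET = {
    "temperature": 0.7,         # recommended; Goliath tolerates higher
    "top_p": 0.9,               # assumption: a typical default
    "repetition_penalty": 1.1,  # assumption: a typical default
}

def clamp_temperature(settings: dict, max_temp: float = 1.0) -> dict:
    """Return a copy with temperature capped at max_temp, per the note
    that Venus should be run no higher than 1.0."""
    out = dict(settings)
    out["temperature"] = min(out["temperature"], max_temp)
    return out
```

For example, `clamp_temperature({"temperature": 1.5})` would bring an overly hot setting back down to the 1.0 ceiling.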
