Borealis

Borealis-10.7B is a 10.7B-parameter model built from 48 Mistral 7B layers, in the same fashion as SOLAR. It was finetuned for over 70 hours on 2x A6000 GPUs, on a large RP and conversational dataset, using the llama2 configuration of Axolotl.
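The 48-layer stack matches SOLAR's depth up-scaling recipe, where two copies of the 32-layer Mistral 7B base are concatenated with an overlap removed. The sketch below is purely illustrative and assumes the same 8-layer trim SOLAR 10.7B used; it is not the actual merge script.

```python
# Illustrative sketch of SOLAR-style depth up-scaling: assembling a
# 48-layer model from a 32-layer Mistral 7B base. The 8-layer trim is
# an assumption borrowed from SOLAR, not a confirmed detail of Borealis.

def depth_upscale_plan(n_base_layers: int = 32, trim: int = 8) -> list[int]:
    """Return the base-layer indices of the up-scaled stack.

    Two copies of the base model are concatenated: the first copy drops
    its last `trim` layers, the second drops its first `trim` layers.
    """
    first = list(range(0, n_base_layers - trim))   # base layers 0..23
    second = list(range(trim, n_base_layers))      # base layers 8..31
    return first + second

plan = depth_upscale_plan()
print(len(plan))  # 24 + 24 = 48 layers
```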

The next step would be a DPO training run on top, but I don't know whether it would be beneficial.

Description

This repo contains fp16 files of Borealis-10.7B, a conversational model.

The goal of this model isn't to break every benchmark, but to be a better RP/ERP/conversational model.

It was trained on several general-purpose datasets to keep it intelligent, but the majority of the data consisted of basic conversations.

Dataset used

  • NobodyExistsOnTheInternet/ToxicQAFinal
  • teknium/openhermes
  • unalignment/spicy-3.1
  • Doctor-Shotgun/no-robots-sharegpt
  • Undi95/toxic-dpo-v0.1-sharegpt
  • Aesir [1], [2], [3-SFW], [3-NSFW]
  • lemonilia/LimaRP
  • Squish42/bluemoon-fandom-1-1-rp-cleaned
  • Undi95/ConversationChronicles-sharegpt-SHARDED (2 sets, modified)

Prompt format: NsChatml

<|im_system|>
{sysprompt}<|im_end|>
<|im_user|>
{input}<|im_end|>
<|im_bot|>
{output}<|im_end|>
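The template above can be assembled programmatically. The special tokens come straight from the model card; the helper function itself is a minimal illustrative sketch, not part of the model's tooling.

```python
# Minimal sketch of building an NsChatml prompt string following the
# template from the model card. Token names (<|im_system|>, <|im_user|>,
# <|im_bot|>, <|im_end|>) are as documented; the helper is illustrative.

def build_nschatml_prompt(sysprompt: str, turns: list[tuple[str, str]]) -> str:
    """turns is a list of (user_input, bot_output) pairs; pass an empty
    bot output for the final turn to leave room for generation."""
    parts = [f"<|im_system|>\n{sysprompt}<|im_end|>"]
    for user, bot in turns:
        parts.append(f"<|im_user|>\n{user}<|im_end|>")
        parts.append(f"<|im_bot|>\n{bot}<|im_end|>" if bot else "<|im_bot|>\n")
    return "\n".join(parts)

prompt = build_nschatml_prompt("You are a friendly companion.",
                               [("Hi, how are you?", "")])
print(prompt)
```

The resulting string can be fed to any text-generation backend; the model should then complete the open `<|im_bot|>` turn.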

Others

If you want to support me, you can here.
