
SchizoGPT 8x22B

Fine-tuned on a dump of the r/ChatGPT Discord #general channel.
Merged into bfloat16 using v2ray/Mixtral-8x22B-v0.1 as the base model and v2ray/SchizoGPT-8x22B-QLoRA as the QLoRA adapter.

Note: this is an experimental tune and will most likely not receive updates for later knowledge cutoffs. Use v2ray/SchizoGPT-8x7B-QLoRA if you want the most up-to-date version!

Prompt Template

Date: 2024/4<username>username1<message>message 1<message>message 2<username>username2<message>message 1<message>message 2<username>username3<message>

Date prefix is optional:

<username>username1<message>message 1<message>message 2<username>username2<message>message 1<message>message 2<username>username3<message>

Use @username to ping a user and #channel name to mention a channel.
Prepend <Re: username> to a message to reply to that user.
Use <filename.ext> to reference a linked file; for example, for https://example.com/image.jpg, use <image.jpg>:

Date: 2023/12<username>example#0001<message>Hello!<username>example#0002<message><Re: example#0001> Hi, look at this image of a cat! <cat.png><username>example#0001<message><Re: example#0002>
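The template above can be assembled programmatically. Below is a minimal sketch; the helper name and the (username, messages) structure are my own, but the control tokens (`Date: `, `<username>`, `<message>`) follow the template described above:

```python
def build_prompt(turns, date=None):
    """Assemble a prompt from (username, [messages]) pairs in chat order.

    date: optional "YYYY/M" string, emitted as the "Date: " prefix.
    """
    prompt = f"Date: {date}" if date else ""
    for username, messages in turns:
        prompt += f"<username>{username}"
        for message in messages:
            prompt += f"<message>{message}"
    return prompt

# To sample a reply, append an open turn for the next speaker and let the
# model continue generating after the trailing <message> token:
prompt = build_prompt(
    [("example#0001", ["Hello!"])],
    date="2023/12",
) + "<username>example#0002<message>"
```

Reply markers like `<Re: example#0001>` are just part of the message text, so they can be prepended to a message string directly.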
Model size: 141B params (Safetensors)
Tensor type: BF16

Model tree for v2ray/SchizoGPT-8x22B

Finetunes: 1 model
Quantizations: 1 model

Dataset used to train v2ray/SchizoGPT-8x22B