
🧪 Just Another Model Experiment

This is one of many experimental iterations I'm sharing publicly while I mess around with training parameters and ideas. It's not a "real" release - just me being transparent about my learning process. Feel free to look under the hood, but don't expect anything production-ready!

Mistral-Nemo-Prism-12B-v2

Mahou-1.5-mistral-nemo-12B-lorablated, fine-tuned on Arkhaios-DPO and Purpura-DPO.

The goal was to reduce archaic language and purple prose in a completely uncensored model.

Method

ORPO-tuned on 2x A100 GPUs for 5 epochs.
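A minimal sketch of the kind of ORPO run described here, using the `trl` library's `ORPOConfig`. Only the learning rate, epoch count, and BF16 precision come from this card; every other value (batch size, output directory) is an illustrative assumption, not the exact configuration used.

```python
# Hypothetical ORPO training config sketch (trl library).
# Values marked "assumed" are illustrative, not the actual settings.
from trl import ORPOConfig

config = ORPOConfig(
    output_dir="Mistral-Nemo-Prism-12B-v2",  # assumed output path
    learning_rate=3e-6,              # lowered for this version (stated)
    num_train_epochs=5,              # 5 epochs (stated)
    per_device_train_batch_size=1,   # assumed; training ran on 2x A100
    bf16=True,                       # matches the published BF16 tensors
)
```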

The learning rate was lowered to 3e-6 for this version. In addition, a system prompt was introduced to further augment the prompts and encourage responses to match the data.
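The system-prompt augmentation can be sketched as a simple preprocessing step over the preference pairs. The system prompt text, field names, and example strings below are all hypothetical; the card does not publish the actual prompt.

```python
# Hypothetical sketch: prepend a steering system prompt to each preference
# pair before training, so responses are pushed toward the data's style.
# The prompt wording and the "prompt"/"chosen"/"rejected" field names are
# assumptions for illustration.

SYSTEM_PROMPT = (
    "Write in clear, modern English. Avoid archaic language and purple prose."
)

def augment_example(example: dict) -> dict:
    """Wrap a preference pair's prompt in chat messages led by the system prompt."""
    return {
        "prompt": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": example["prompt"]},
        ],
        "chosen": example["chosen"],
        "rejected": example["rejected"],
    }

pair = {
    "prompt": "Describe a thunderstorm.",
    "chosen": "Rain hammered the roof while lightning lit the street.",
    "rejected": "Lo, the heavens didst weep with furious tempest...",
}
augmented = augment_example(pair)
```

In a real pipeline this function would be applied to every row of the DPO datasets (e.g. via a dataset `map`) before tokenization.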

Model size: 12.2B params (BF16, Safetensors)
