
This model is intended as a strong base for downstream fine-tuning on a variety of tasks. Based on our internal evaluations, we believe it is one of the strongest models available for most downstream fine-tuning use cases. You can read more about our development and evaluation process here.

It is a hierarchical SLERP merge of teknium/OpenHermes-2.5-Mistral-7B, Intel/neural-chat-7b-v3-3, meta-math/MetaMath-Mistral-7B, and openchat/openchat-3.5-1210. berkeley-nest/Starling-LM-7B-alpha was omitted from this version of the model.
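
Since the model is published as a base for further fine-tuning, the following is a minimal loading and smoke-test sketch (not part of the original card) using the Hugging Face transformers library; the prompt and generation settings are illustrative only.

```python
# Minimal sketch: load the merged base model before wiring it into a fine-tuning loop.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenPipe/mistral-ft-optimized-1227"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are published in BF16
)

# Quick generation check to confirm the weights loaded correctly.
inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```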

Model size: 7.24B params (Safetensors)
Tensor type: BF16
Downloads last month: 48,960

