
Chupacabra-8x7B-experts

A Mixtral-8x7B-style mixture-of-experts merge of Chupacabra, OpenChat, Falkor, and Starling-LM.

Model Description

Special thanks to @cg123 for his great work on https://github.com/cg123/mergekit.

This merge was made using the 'mixtral' branch of the mergekit repo.
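
For illustration, here is a minimal sketch of what a mergekit MoE recipe looks like. The card does not document the actual recipe, so the base model, expert repo ids, and routing prompts below are placeholders, and only three of the eight expert slots implied by "8x7B" are shown; on current mergekit the 'mixtral' branch functionality is exposed as the `mergekit-moe` entry point.

```python
# Hypothetical sketch of a mergekit MoE merge. The base model, expert
# repos, and positive_prompts are placeholders, not the author's recipe.
import subprocess
import textwrap

config = textwrap.dedent("""\
    base_model: mistralai/Mistral-7B-v0.1        # assumed Mistral base
    gate_mode: hidden                            # init router gates from hidden states
    dtype: bfloat16
    experts:                                     # a real 8x7B layout has 8 experts
      - source_model: perlthoughts/Chupacabra-7B-v2      # assumed expert ids
        positive_prompts: ["reasoning", "logic"]
      - source_model: openchat/openchat-3.5-1210
        positive_prompts: ["chat", "assistant"]
      - source_model: berkeley-nest/Starling-LM-7B-alpha
        positive_prompts: ["helpful answer"]
""")

with open("moe-config.yml", "w", encoding="utf-8") as f:
    f.write(config)

# Build the sparse-MoE checkpoint from the config (requires `pip install mergekit`).
subprocess.run(["mergekit-moe", "moe-config.yml", "./merged-moe"], check=True)
```

The `positive_prompts` guide how the router gates are initialized, so each expert is preferentially activated for tokens resembling its prompts.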

#OneManArmy
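
A minimal sketch of loading and prompting the merge with Hugging Face Transformers follows; the repo id is an assumption inferred from this card's title.

```python
# Minimal loading sketch; "perlthoughts/Chupacabra-8x7B-experts" is an
# assumed repo id inferred from this card's title.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "perlthoughts/Chupacabra-8x7B-experts"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the checkpoint ships half-precision tensors
    device_map="auto",           # shard ~46.7B params across available GPUs
)

prompt = "Briefly explain how a mixture-of-experts transformer routes tokens."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```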

More Info

  • Developed by: Ray Hernandez
  • Model type: Mixtral (Mistral-based mixture of experts)
  • Language(s) (NLP): English
  • License: Apache 2.0

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

| Metric | Value |
|---|---|
| Avg. | 70.40 |
| AI2 Reasoning Challenge (25-shot) | 68.77 |
| HellaSwag (10-shot) | 86.11 |
| MMLU (5-shot) | 63.86 |
| TruthfulQA (0-shot) | 63.50 |
| Winogrande (5-shot) | 80.51 |
| GSM8K (5-shot) | 59.67 |
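
The leaderboard computes these numbers with EleutherAI's lm-evaluation-harness. Below is a rough sketch of reproducing them locally; the task names and few-shot counts mirror the table above, the repo id is assumed as before, and the leaderboard pins its own harness version and task configs, so local scores may differ slightly.

```python
# Rough local reproduction sketch using EleutherAI's lm-evaluation-harness
# (pip install lm-eval). Leaderboard task configs may differ from defaults.
import lm_eval

MODEL_ARGS = "pretrained=perlthoughts/Chupacabra-8x7B-experts,dtype=bfloat16"  # assumed repo id

# (task name, few-shot count) pairs mirroring the table above;
# truthfulqa_mc2 is the multiple-choice variant the leaderboard reports.
TASKS = [
    ("arc_challenge", 25),
    ("hellaswag", 10),
    ("mmlu", 5),
    ("truthfulqa_mc2", 0),
    ("winogrande", 5),
    ("gsm8k", 5),
]

for task, shots in TASKS:
    results = lm_eval.simple_evaluate(
        model="hf",
        model_args=MODEL_ARGS,
        tasks=[task],
        num_fewshot=shots,
        batch_size="auto",
    )
    print(task, results["results"][task])
```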