# Chupacabra-8x7B-experts

A Mixtral-8x7B experts merge of Chupacabra, OpenChat, Falkor, and Starling-LM.
## Model Description

Special thanks to @cg123 for his great work on https://github.com/cg123/mergekit.
This merge was made using the `mixtral` branch of the mergekit repo.
#OneManArmy
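For reference, experts merges on the mergekit `mixtral` branch are driven by a YAML config and the `mergekit-moe` script. The sketch below is illustrative only: the source repo names, base model, and gate prompts are assumptions for demonstration, not the actual config used for this model.

```yaml
# Hypothetical mergekit-moe config sketch; repo names and prompts are
# examples, not the real recipe behind Chupacabra-8x7B-experts.
base_model: mistralai/Mistral-7B-v0.1
gate_mode: hidden        # route tokens by hidden-state similarity to the prompts
dtype: bfloat16
experts:
  - source_model: perlthoughts/Chupacabra-7B-v2   # assumed repo name
    positive_prompts:
      - "reasoning"
  - source_model: openchat/openchat_3.5           # assumed repo name
    positive_prompts:
      - "chat"
      - "assistant"
```

A config like this would typically be run with `mergekit-moe config.yml ./output-model`, which assembles the listed 7B models into a Mixtral-style mixture-of-experts checkpoint.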
## More Info
- Developed by: Ray Hernandez
- Model type: Mistral
- Language(s) (NLP): English
- License: Apache 2.0
## Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Benchmark | Metric | Split | Value |
|---|---|---|---|
| Avg. | | | 70.40 |
| AI2 Reasoning Challenge (25-shot) | normalized accuracy | test | 68.77 |
| HellaSwag (10-shot) | normalized accuracy | validation | 86.11 |
| MMLU (5-shot) | accuracy | test | 63.86 |
| TruthfulQA (0-shot) | mc2 | validation | 63.50 |
| Winogrande (5-shot) | accuracy | validation | 80.51 |
| GSM8k (5-shot) | accuracy | test | 59.67 |