---
license: cc-by-nc-4.0
---

Merge of [ehartford/dolphin-2.1-mistral-7b](https://huggingface.co/ehartford/dolphin-2.1-mistral-7b) and [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) using the ties merge method (a toy sketch of the procedure appears at the end of this card).

### *Weights*
- [ehartford/dolphin-2.1-mistral-7b](https://huggingface.co/ehartford/dolphin-2.1-mistral-7b): 0.5
- [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca): 0.3

### *Density*
- [ehartford/dolphin-2.1-mistral-7b](https://huggingface.co/ehartford/dolphin-2.1-mistral-7b): 0.5
- [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca): 0.5

# Quantized versions

Quantized versions of this model are available thanks to [TheBloke](https://hf.co/TheBloke).

##### GPTQ
- [TheBloke/Dolphin2.1-OpenOrca-7B-GPTQ](https://huggingface.co/TheBloke/Dolphin2.1-OpenOrca-7B-GPTQ)

##### GGUF
- [TheBloke/Dolphin2.1-OpenOrca-7B-GGUF](https://huggingface.co/TheBloke/Dolphin2.1-OpenOrca-7B-GGUF)

##### AWQ
- [TheBloke/Dolphin2.1-OpenOrca-7B-AWQ](https://huggingface.co/TheBloke/Dolphin2.1-OpenOrca-7B-AWQ)

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

📑 Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Dolphin2.1-OpenOrca-7B).

| Metric              | Value |
|---------------------|-------|
| Avg.                | 53.0  |
| ARC (25-shot)       | 63.91 |
| HellaSwag (10-shot) | 84.26 |
| MMLU (5-shot)       | 62.66 |
| TruthfulQA (0-shot) | 53.84 |
| Winogrande (5-shot) | 78.22 |
| GSM8K (5-shot)      | 19.94 |
| DROP (3-shot)       | 8.17  |
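For readers curious what the *weight* and *density* parameters above actually control, here is a minimal toy sketch of TIES-style merging: trim each task vector to its top-`density` fraction of entries by magnitude, elect a per-parameter sign from the weighted sum, then average the agreeing entries. This is an illustrative assumption of how merge tools apply these parameters, not the exact code used to produce this model.

```python
import torch

def ties_merge(base, finetuned, weights, densities):
    """Toy TIES merge: trim each task vector to its top-`density`
    entries by magnitude, elect a per-parameter sign from the weighted
    sum, then average the trimmed deltas that agree with that sign.
    Illustrative sketch only; not the exact procedure used here."""
    deltas = []
    for ft, w, d in zip(finetuned, weights, densities):
        delta = ft - base                          # task vector
        k = max(1, int(d * delta.numel()))         # entries to keep
        # threshold = k-th largest absolute value
        thresh = delta.abs().flatten().kthvalue(delta.numel() - k + 1).values
        trimmed = torch.where(delta.abs() >= thresh, delta, torch.zeros_like(delta))
        deltas.append(w * trimmed)
    stacked = torch.stack(deltas)
    elected_sign = stacked.sum(dim=0).sign()       # weighted majority sign
    agree = stacked.sign() == elected_sign         # keep agreeing entries only
    counts = agree.sum(dim=0).clamp(min=1)         # avoid division by zero
    return base + (stacked * agree).sum(dim=0) / counts

# Toy run using this card's weights/densities on random tensors
base = torch.randn(8)
models = [base + 0.1 * torch.randn(8), base + 0.1 * torch.randn(8)]
merged = ties_merge(base, models, weights=[0.5, 0.3], densities=[0.5, 0.5])
print(merged)
```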
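As a usage sketch, the merged model should load like any Mistral-based causal LM via `transformers`. The repo id below is inferred from the leaderboard dataset name, and the ChatML prompt format is an assumption carried over from both parent models, so verify it against the tokenizer's chat template.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Weyaxi/Dolphin2.1-OpenOrca-7B"  # assumed repo id for this card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)

# ChatML-style prompt, as used by both parent models (check
# tokenizer.chat_template before relying on this format)
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nExplain ties merging in one sentence.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
))
```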
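For the GGUF quantizations linked above, a sketch using `llama-cpp-python` follows; the local filename is an assumption based on TheBloke's usual GGUF naming, so substitute whichever quant file you downloaded from the GGUF repo.

```python
from llama_cpp import Llama

# Assumed local filename; download a quant from the GGUF repo linked above
llm = Llama(model_path="dolphin2.1-openorca-7b.Q4_K_M.gguf", n_ctx=2048)

out = llm(
    "<|im_start|>user\nSay hello.<|im_end|>\n<|im_start|>assistant\n",
    max_tokens=64,
    stop=["<|im_end|>"],
)
print(out["choices"][0]["text"])
```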