# zephyr-wizard-kuno-royale-BF16-merge-7B
This is an experimental merge of pre-trained language models created using mergekit. All source model weights are BF16, avoiding issues arising from mixed-precision merges.
Although the Zephyr beta and WizardLM 2 7B models are touted as SOTA and generate more varied prose than base Mistral v0.1, their relatively mediocre GSM-8K benchmark results suggest only average reasoning capability in one-shot narrative text completion. The kuno-royale-v2 model was selected for this merge because of its higher GSM-8K score.
The native prompt format is Alpaca, although at least one of the constituent models was fine-tuned with Vicuna prompts.
Lightly tested with ChatML instruct prompts, temperature 1.0, and min-P 0.02.
- Full weights: [grimjim/zephyr-wizard-kuno-royale-BF16-merge-7B](https://huggingface.co/grimjim/zephyr-wizard-kuno-royale-BF16-merge-7B)
- GGUF quants: [grimjim/zephyr-wizard-kuno-royale-BF16-merge-7B-GGUF](https://huggingface.co/grimjim/zephyr-wizard-kuno-royale-BF16-merge-7B-GGUF)
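For reference, a minimal inference sketch (not part of the original card) that loads the full-weight model with Hugging Face transformers, builds a native Alpaca-format prompt, and samples with the settings noted above. The instruction text is a placeholder, and the `min_p` argument requires a recent transformers release (roughly v4.39 or later):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "grimjim/zephyr-wizard-kuno-royale-BF16-merge-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are natively BF16
    device_map="auto",           # requires the accelerate package
)

# Standard Alpaca prompt template; the instruction is a placeholder.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite the opening paragraph of a mystery story.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.0,  # settings the card reports testing with
    min_p=0.02,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```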
## Merge Details

### Merge Method
This model was merged using the SLERP merge method.
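For intuition, SLERP interpolates along the great-circle arc between two weight vectors rather than along the straight line between them, which better preserves weight norms. Below is a simplified per-tensor sketch in PyTorch; mergekit's actual implementation handles additional edge cases and operates over the full model:

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors at fraction t."""
    a32, b32 = a.flatten().float(), b.flatten().float()
    # Angle between the two weight vectors, treated as points on a hypersphere.
    dot = torch.clamp((a32 / a32.norm()) @ (b32 / b32.norm()), -1.0, 1.0)
    omega = torch.arccos(dot)
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly colinear: fall back to plain linear interpolation
        out = (1 - t) * a32 + t * b32
    else:
        out = (torch.sin((1 - t) * omega) / so) * a32 + (torch.sin(t * omega) / so) * b32
    return out.reshape(a.shape).to(a.dtype)

# With t = 0.5, as in the configuration below, each tensor ends up midway
# along the arc between the two parent models' corresponding tensors.
```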
### Models Merged

The following models were included in the merge:
- [grimjim/zephyr-beta-wizardLM-2-merge-7B](https://huggingface.co/grimjim/zephyr-beta-wizardLM-2-merge-7B)
- [core-3/kuno-royale-v2-7b](https://huggingface.co/core-3/kuno-royale-v2-7b)
### Configuration

The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: grimjim/zephyr-beta-wizardLM-2-merge-7B
        layer_range: [0,32]
      - model: core-3/kuno-royale-v2-7b
        layer_range: [0,32]
merge_method: slerp
base_model: grimjim/zephyr-beta-wizardLM-2-merge-7B
parameters:
  t:
    - value: 0.5
dtype: bfloat16
```
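Assuming mergekit is installed (it is published on PyPI as `mergekit`), a configuration like this is normally applied with the `mergekit-yaml` command, e.g. `mergekit-yaml config.yml ./output-model-directory`, optionally adding `--cuda` to run the merge on GPU.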
## Evaluation results

Scores from the Open LLM Leaderboard:

| Benchmark | Shots | Split | Metric | Score |
|---|---|---|---|---|
| AI2 Reasoning Challenge | 25-shot | test | normalized accuracy | 68.69 |
| HellaSwag | 10-shot | validation | normalized accuracy | 86.87 |
| MMLU | 5-shot | test | accuracy | 64.87 |
| TruthfulQA | 0-shot | validation | mc2 | 65.47 |
| Winogrande | 5-shot | validation | accuracy | 80.03 |
| GSM8k | 5-shot | test | accuracy | 63.31 |