Collection: Testing Might be broken (testing only models, 10 items)
Another trial of merging models of different sizes. This is still under testing; it should be more stable, but I have no idea whether it is improving or degrading the base model. In this version I changed the recipe to weight WestLake more heavily. Recipe:
```yaml
merge_method: task_anysize
base_model: princeton-nlp/Sheared-LLaMA-2.7B-ShareGPT
models:
  - model: senseable/WestLake-7B-v2
    parameters:
      weight: 1.0
dtype: bfloat16
```
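If this recipe is meant for mergekit, it can typically be applied with the `mergekit-yaml` CLI. A minimal sketch, with the caveat that `task_anysize` is not a mainline mergekit merge method, so a fork that provides it is assumed:

```shell
# Install mergekit (assumption: a fork that implements the
# task_anysize method; mainline mergekit does not ship it).
pip install mergekit

# Save the recipe above as config.yaml, then run the merge.
# The output directory name is arbitrary.
mergekit-yaml config.yaml ./merged-model
```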
Detailed results can be found here.
| Metric | Value |
|---|---|
| Avg. | 36.31 |
| AI2 Reasoning Challenge (25-Shot) | 34.04 |
| HellaSwag (10-Shot) | 58.05 |
| MMLU (5-Shot) | 26.24 |
| TruthfulQA (0-shot) | 42.64 |
| Winogrande (5-shot) | 56.91 |
| GSM8k (5-shot) | 0.00 |
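The reported average is just the unweighted mean of the six benchmark scores above, which can be checked in a few lines (the two-decimal rounding convention is an assumption):

```python
# Recompute the leaderboard average from the per-benchmark
# scores in the table above.
scores = {
    "ARC (25-shot)": 34.04,
    "HellaSwag (10-shot)": 58.05,
    "MMLU (5-shot)": 26.24,
    "TruthfulQA (0-shot)": 42.64,
    "Winogrande (5-shot)": 56.91,
    "GSM8k (5-shot)": 0.00,
}

avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # → 36.31, matching the Avg. row
```

The GSM8k score of 0.00 drags the mean down noticeably; without it the remaining five benchmarks would average about 43.6.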