Similar to llama2-22b, but with BLOCK_DIAGONAL=false in the merge and twice the fine-tuning tokens.

Again, this model is not intended for direct use; it is meant as a base for further tuning and merging.
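To illustrate the `BLOCK_DIAGONAL=false` setting mentioned above, here is a minimal NumPy sketch of the difference between a block-diagonal and a block-triangular combination of two square weight matrices. This is not the repository's actual merge code; the `merge_blocks` helper and the choice of off-diagonal fill are hypothetical, for illustration only.

```python
import numpy as np

def merge_blocks(a: np.ndarray, b: np.ndarray, block_diagonal: bool) -> np.ndarray:
    """Combine square matrices a and b into one larger matrix.

    block_diagonal=True places a and b on the diagonal with zeros everywhere
    else. block_diagonal=False additionally fills the lower-left block,
    giving a block-triangular layout in which the second block can "see"
    the first. The ones used as the cross-term here are a hypothetical
    placeholder, not what the actual merge does.
    """
    m, n = a.shape[0], b.shape[0]
    out = np.zeros((m + n, m + n), dtype=a.dtype)
    out[:m, :m] = a          # top-left block: first matrix
    out[m:, m:] = b          # bottom-right block: second matrix
    if not block_diagonal:
        # Lower-left cross-term present only in the block-triangular case.
        out[m:, :m] = np.ones((n, m), dtype=a.dtype)
    return out

a = np.full((2, 2), 2.0)
b = np.full((3, 3), 3.0)
tri = merge_blocks(a, b, block_diagonal=False)
diag = merge_blocks(a, b, block_diagonal=True)
```

In the block-triangular result the lower-left block is populated while the upper-right block stays zero; in the block-diagonal result both off-diagonal blocks are zero.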

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric               | Value |
|----------------------|-------|
| Avg.                 | 46.86 |
| ARC (25-shot)        | 58.28 |
| HellaSwag (10-shot)  | 82.69 |
| MMLU (5-shot)        | 54.53 |
| TruthfulQA (0-shot)  | 39.23 |
| Winogrande (5-shot)  | 75.93 |
| GSM8K (5-shot)       | 11.22 |
| DROP (3-shot)        | 6.17  |