We release the long instruction-following dataset, LongAlpaca-12k, and the corresponding models: LongAlpaca-7B, LongAlpaca-13B, and LongAlpaca-70B.
- (These SFT models, Llama-2-13b-chat-longlora-32k-sft and Llama-2-70b-chat-longlora-32k-sft, have been deprecated.)
## Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 30.42 |
| ARC (25-shot) | 26.54 |
| HellaSwag (10-shot) | 26.10 |
| MMLU (5-shot) | 23.12 |
| TruthfulQA (0-shot) | 49.16 |
| Winogrande (5-shot) | 64.33 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 23.71 |
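As a quick sanity check, the reported Avg. is the unweighted mean of the seven per-task scores above; a minimal Python sketch (the dictionary below is just the table restated, not an official evaluation artifact):

```python
# Unweighted mean of the seven Open LLM Leaderboard task scores from the table.
scores = {
    "ARC (25-shot)": 26.54,
    "HellaSwag (10-shot)": 26.10,
    "MMLU (5-shot)": 23.12,
    "TruthfulQA (0-shot)": 49.16,
    "Winogrande (5-shot)": 64.33,
    "GSM8K (5-shot)": 0.0,
    "DROP (3-shot)": 23.71,
}
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 30.42, matching the Avg. row
```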