**We release the long instruction-following dataset**, [LongAlpaca-12k](https://drive.google.com/file/d/1JVC1p_Ht-1h61tKitOCW0blnCHf-552U/view?usp=share_link), and **the corresponding models**, [LongAlpaca-7B](https://huggingface.co/Yukang/LongAlpaca-7B), [LongAlpaca-13B](https://huggingface.co/Yukang/LongAlpaca-13B), and [LongAlpaca-70B](https://huggingface.co/Yukang/LongAlpaca-70B).

- (*These SFT models*, [Llama-2-13b-chat-longlora-32k-sft](https://huggingface.co/Yukang/Llama-2-13b-chat-longlora-32k-sft) and [Llama-2-70b-chat-longlora-32k-sft](https://huggingface.co/Yukang/Llama-2-70b-chat-longlora-32k-sft), *have been deprecated*.)

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Yukang__Llama-2-13b-chat-longlora-32k-sft).

| Metric               | Value |
|----------------------|-------|
| Avg.                 | 30.42 |
| ARC (25-shot)        | 26.54 |
| HellaSwag (10-shot)  | 26.1  |
| MMLU (5-shot)        | 23.12 |
| TruthfulQA (0-shot)  | 49.16 |
| Winogrande (5-shot)  | 64.33 |
| GSM8K (5-shot)       | 0.0   |
| DROP (3-shot)        | 23.71 |