---
license: apache-2.0
---
# Chinese-Alpaca-2-7B
This is the full Chinese-Alpaca-2-7B model, which can be loaded directly for inference and full-parameter training.
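For example, here is a minimal inference sketch with 🤗transformers. It assumes the Hugging Face repo id `hfl/chinese-alpaca-2-7b` and a Llama-2 style prompt template; verify both against the project repository before use.

```python
# Minimal inference sketch using 🤗transformers.
# Assumptions: the repo id "hfl/chinese-alpaca-2-7b" and the Llama-2 style
# prompt template below; check both against the project README.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hfl/chinese-alpaca-2-7b"  # assumed repo id; a local path also works

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 so the 7B model fits a single consumer GPU
    device_map="auto",
)

# Alpaca-2 is instruction-tuned; a Llama-2 chat style template is assumed here.
prompt = (
    "[INST] <<SYS>>\n"
    "You are a helpful assistant. 你是一个乐于助人的助手。\n"
    "<</SYS>>\n\n"
    "请简要介绍一下大语言模型。 [/INST]"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, top_p=0.9)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```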
**Related models** 👇
- Long context base models
- Base models
- Instruction/Chat models
## Description of Chinese-LLaMA-Alpaca-2
This project is based on Llama-2, released by Meta, and is the second generation of the Chinese LLaMA & Alpaca LLM project. We open-source Chinese LLaMA-2 (a foundation model) and Chinese Alpaca-2 (an instruction-following model). These models extend the original Llama-2 with an expanded and optimized Chinese vocabulary. We then performed incremental pre-training on large-scale Chinese data, which further improved fundamental semantic understanding of Chinese and yielded a significant performance improvement over the first-generation models. The models support a 4K context, which can be extended to 18K+ using the NTK method, as sketched below.
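As one way to apply NTK-style context extension, recent 🤗transformers releases expose dynamic NTK RoPE scaling through the `rope_scaling` argument. Whether this matches the project's own NTK implementation should be checked against its repository, so treat the snippet below as a sketch under that assumption.

```python
# Sketch: extending the 4K context via dynamic NTK RoPE scaling in
# transformers (an assumption; the project repo may ship its own NTK patch
# with different parameters).
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "hfl/chinese-alpaca-2-7b",                        # assumed repo id
    rope_scaling={"type": "dynamic", "factor": 4.0},  # ~4x the 4K training context
    torch_dtype="auto",
    device_map="auto",
)
```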
The main contents of this project include:
- 🚀 New extended Chinese vocabulary beyond Llama-2, open-sourcing the Chinese LLaMA-2 and Alpaca-2 LLMs.
- 🚀 Open-sourced pre-training and instruction fine-tuning (SFT) scripts for further tuning on users' own data.
- 🚀 Quickly deploy and experience the quantized LLMs on the CPU/GPU of a personal PC (see the sketch after this list).
- 🚀 Support for the LLaMA ecosystem: 🤗transformers, llama.cpp, text-generation-webui, LangChain, vLLM, etc.
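As an illustration of the quantized-deployment path, here is a hypothetical sketch using the llama-cpp-python bindings on a GGUF copy of the model. The file name and quantization type are assumptions; the conversion and quantization themselves are done with llama.cpp's own tooling.

```python
# Sketch: CPU inference on a quantized GGUF build of the model via
# llama-cpp-python. "chinese-alpaca-2-7b.Q4_K_M.gguf" is a hypothetical
# name for a model converted and quantized with llama.cpp tooling.
from llama_cpp import Llama

llm = Llama(
    model_path="chinese-alpaca-2-7b.Q4_K_M.gguf",  # assumed local GGUF file
    n_ctx=4096,   # the native 4K context
    n_threads=8,  # tune to your CPU
)

# Same assumed Llama-2 style prompt template as above.
prompt = (
    "[INST] <<SYS>>\nYou are a helpful assistant. 你是一个乐于助人的助手。\n<</SYS>>\n\n"
    "用一句话解释什么是量化。 [/INST]"
)

out = llm(prompt, max_tokens=128, temperature=0.7)
print(out["choices"][0]["text"])
```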
Please refer to https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/ for details.
## Open LLM Leaderboard Evaluation Results
Detailed results can be found here.
| Metric | Value |
|---|---|
| Avg. | 47.11 |
| ARC (25-shot) | 49.57 |
| HellaSwag (10-shot) | 72.62 |
| MMLU (5-shot) | 46.5 |
| TruthfulQA (0-shot) | 48.63 |
| Winogrande (5-shot) | 70.01 |
| GSM8K (5-shot) | 5.76 |
| DROP (3-shot) | 36.66 |