News
Our first data-centric LLM competition begins! Please visit the competition's official websites, FT-Data Ranker (1B Track, 7B Track), for more information.
Introduction
This is a reference LLM from Data-Juicer.
The model architecture is LLaMA2-7B, and we built it upon a pre-trained Chinese checkpoint from FlagAlpha. The model is fine-tuned on 52k Chinese chat samples from Data-Juicer's refined alpaca-CoT data. It beats LLaMA2-7B fine-tuned on 543k Belle samples in GPT-4 evaluation.
For more details, please refer to our paper.
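Since the backbone is LLaMA2-7B, the checkpoint should load with the standard transformers causal-LM API. The snippet below is a minimal sketch; the repo ID is a placeholder (this card does not state it), and the prompt is only an illustrative example.

```python
# Minimal usage sketch for a LLaMA2-7B-based chat checkpoint.
# The repo ID below is a placeholder; replace it with this model's actual
# Hugging Face repo ID before running.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<this-model-repo-id>"  # placeholder, not the confirmed ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "请介绍一下 Data-Juicer。"  # example Chinese prompt: "Please introduce Data-Juicer."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```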