
News

Our first data-centric LLM competition has begun! For more information, please visit the competition's official websites: FT-Data Ranker (1B Track, 7B Track).

Introduction

This is a reference LLM from Data-Juicer.

The model architecture is LLaMA2-7B, and we built it upon a pre-trained Chinese checkpoint from FlagAlpha. The model is fine-tuned on 52k Chinese chat samples from Data-Juicer's refined Alpaca-CoT data. In GPT-4 evaluation, it outperforms LLaMA2-7B fine-tuned on 543k Belle samples.

For more details, please refer to our paper.
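
Below is a minimal usage sketch with 🤗 Transformers, assuming standard `AutoModelForCausalLM` loading; the sampling parameters and the example prompt are illustrative choices, not the official configuration.

```python
# Minimal usage sketch (assumes standard Transformers causal-LM loading;
# generation settings below are illustrative, not the official ones).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "datajuicer/LLaMA2-7B-ZH-Chat-52k"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 keeps a 7B model within a single GPU's memory
    device_map="auto",
)

# Example Chinese chat prompt: "Introduce large language models in one sentence."
prompt = "请用一句话介绍一下大语言模型。"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```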

[Figure: exp_llama, GPT-4 evaluation results]

