
Instruction Pre-Training: Language Models are Supervised Multitask Learners (EMNLP 2024)

This repo contains the general models pre-trained from scratch (on 100B tokens) in our paper Instruction Pre-Training: Language Models are Supervised Multitask Learners.

We explore supervised multitask pre-training by proposing Instruction Pre-Training, a framework that scalably augments massive raw corpora with instruction-response pairs to pre-train language models. The instruction-response pairs are generated by an efficient instruction synthesizer built on open-source models. In our experiments, we synthesize 200M instruction-response pairs covering 40+ task categories to verify the effectiveness of Instruction Pre-Training. Instruction Pre-Training outperforms vanilla pre-training in both general pre-training from scratch and domain-adaptive continual pre-training. In pre-training from scratch, Instruction Pre-Training not only improves pre-trained base models but also benefits more from further instruction tuning. In continual pre-training, Instruction Pre-Training enables Llama3-8B to be comparable to or even outperform Llama3-70B.
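
To make the augmentation concrete, here is a minimal sketch of how a raw-corpus chunk could be combined with synthesized instruction-response pairs into a single pre-training example. The template, field names, and sample data below are illustrative assumptions, not the exact format produced by our instruction synthesizer or used in the released datasets.

# Illustrative sketch only: the concatenation template and field names are
# assumptions, not the exact format used by the instruction synthesizer.
def build_pretraining_example(raw_text, qa_pairs):
    """Append synthesized instruction-response pairs to a raw-corpus chunk."""
    parts = [raw_text.strip()]
    for pair in qa_pairs:
        parts.append(f"Question: {pair['instruction']}")
        parts.append(f"Answer: {pair['response']}")
    return "\n".join(parts)

example = build_pretraining_example(
    "Photosynthesis converts light energy into chemical energy stored in glucose.",
    [{"instruction": "What does photosynthesis convert light energy into?",
      "response": "Chemical energy stored in glucose."}],
)
print(example)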

**************************** Updates ****************************

  • 2024/9/20: Our paper has been accepted by EMNLP 2024 main conference🎉
  • 2024/9/11: Updated FAQ on continual pre-training from Llama3
  • 2024/8/29: Updated guidelines on evaluating any 🤗Huggingface models on the domain-specific tasks
  • 2024/7/31: Updated pre-training suggestions in the Advanced Usage section of instruction-synthesizer
  • 2024/7/15: We scaled up the pre-trained tokens from 100B to 250B, with the number of synthesized instruction-response pairs reaching 500M. The performance trend on downstream tasks throughout the pre-training process is shown in the figure below.

    [Figure: downstream-task performance throughout pre-training]

  • 2024/6/21: Released the paper, code, and resources

Resources

🤗 We share our data and models with example usages; feel free to open a discussion on this page! 🤗

General Pre-Training From Scratch

We augment the RefinedWeb corpus with instruction-response pairs generated by our context-based instruction synthesizer to pre-train general language models from scratch.
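
As a quick smoke test of the released base model, a minimal 🤗 Transformers generation sketch is shown below; the prompt and generation settings are illustrative, not recommended settings.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "instruction-pretrain/InstructLM-500M"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# The model expects a BOS token at the start of the input
# (see the add_bos_token note in the evaluation commands below).
inputs = tokenizer("Instruction Pre-Training is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))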

To evaluate our general base model using the lm-evaluation-harness framework:

  1. Setup dependencies:
git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .
  2. Evaluate:
MODEL=instruction-pretrain/InstructLM-500M
add_bos_token=True # this flag is needed because lm-eval-harness sets add_bos_token to False by default, but our model requires add_bos_token to be True

accelerate launch -m lm_eval --model hf \
    --model_args pretrained=${MODEL},add_bos_token=${add_bos_token},dtype=float16  \
    --gen_kwargs do_sample=False \
    --tasks piqa,hellaswag,winogrande \
    --batch_size auto \
    --num_fewshot 0

accelerate launch -m lm_eval --model hf \
    --model_args pretrained=${MODEL},add_bos_token=${add_bos_token},dtype=float16 \
    --gen_kwargs do_sample=False \
    --tasks social_iqa,ai2_arc,openbookqa,boolq,mmlu \
    --batch_size auto \
    --num_fewshot 5
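
To sanity-check the add_bos_token behavior outside the harness, you can inspect the tokenized input directly. This uses the standard 🤗 Transformers tokenizer API and is not part of the official evaluation setup.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("instruction-pretrain/InstructLM-500M")
ids = tokenizer("Hello world").input_ids
# With add_bos_token in effect, the first id should be the BOS token id.
print(ids[0] == tokenizer.bos_token_id)
print(tokenizer.convert_ids_to_tokens(ids))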

Citation

If you find our work helpful, please cite us:

Instruction Pre-Training (EMNLP 2024)

@article{cheng2024instruction,
  title={Instruction Pre-Training: Language Models are Supervised Multitask Learners},
  author={Cheng, Daixuan and Gu, Yuxian and Huang, Shaohan and Bi, Junyu and Huang, Minlie and Wei, Furu},
  journal={arXiv preprint arXiv:2406.14491},
  year={2024}
}

Adapt LLM to Domains (ICLR 2024)

@inproceedings{cheng2024adapting,
  title={Adapting Large Language Models via Reading Comprehension},
  author={Daixuan Cheng and Shaohan Huang and Furu Wei},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=y886UXPEZ0}
}