arXiv:2310.05914

NEFTune: Noisy Embeddings Improve Instruction Finetuning

Published on Oct 9, 2023

Abstract

We show that language model finetuning can be improved, sometimes dramatically, with a simple augmentation. NEFTune adds noise to the embedding vectors during training. Standard finetuning of LLaMA-2-7B using Alpaca achieves 29.79% on AlpacaEval, which rises to 64.69% using noisy embeddings. NEFTune also improves over strong baselines on modern instruction datasets. Models trained with Evol-Instruct see a 10% improvement, with ShareGPT an 8% improvement, and with OpenPlatypus an 8% improvement. Even powerful models further refined with RLHF such as LLaMA-2-Chat benefit from additional training with NEFTune.
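The technique is small enough to sketch. Below is a minimal, hedged PyTorch sketch of the idea, not the authors' released code: following the paper's description, uniform noise in [-1, 1] is scaled by alpha / sqrt(L * d), where L is the sequence length and d the embedding dimension, and added to the input embeddings during training only. The hook name neftune_hook and the default alpha value are illustrative choices.

```python
import math
import torch

def neftune_hook(module, inputs, output, noise_alpha=5.0):
    """Add uniform noise to embedding outputs during training (NEFTune-style).

    Noise is sampled from Uniform(-1, 1) and scaled by alpha / sqrt(L * d),
    where L is the sequence length and d the embedding dimension.
    """
    if module.training:
        # output has shape (batch, L, d) for a standard embedding layer.
        seq_len, embed_dim = output.shape[1], output.shape[2]
        magnitude = noise_alpha / math.sqrt(seq_len * embed_dim)
        # In-place uniform sampling in [-magnitude, magnitude], added to
        # every embedding; returning a tensor from a forward hook replaces
        # the module's output.
        output = output + torch.empty_like(output).uniform_(-magnitude, magnitude)
    return output

# Attach to a causal LM's input embedding layer before finetuning, e.g.:
# model.get_input_embeddings().register_forward_hook(neftune_hook)
```

Because the noise is gated on module.training, evaluation and generation are unaffected; the model sees clean embeddings at inference time.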

Models citing this paper: 29

Datasets citing this paper: 14

Spaces citing this paper: 12

Collections including this paper: 7