---
license: llama3
language:
  - en
---

# Smaug-Llama-3-70B-Instruct-ExPO

This is the extrapolated (ExPO) model built from abacusai/Smaug-Llama-3-70B-Instruct and meta-llama/Meta-Llama-3-70B-Instruct, as described in the "Weak-to-Strong Extrapolation Expedites Alignment" paper.

Specifically, we obtain this model by extrapolating (alpha = 0.3) from the weights of the SFT and DPO/RLHF checkpoints, which yields superior alignment with human preferences; a rough sketch of this step is shown below.
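
As a minimal illustration only, the sketch below assumes the ExPO update takes the form theta_expo = theta_dpo + alpha * (theta_dpo - theta_sft), with meta-llama/Meta-Llama-3-70B-Instruct in the SFT role and abacusai/Smaug-Llama-3-70B-Instruct in the DPO/RLHF role. The output path is hypothetical, and the actual procedure used for this model (e.g., shard-by-shard merging rather than loading both 70B models into memory) may differ; consult the paper and its code for the exact formulation.

```python
# Hedged sketch of ExPO-style weight extrapolation (not the official script).
import torch
from transformers import AutoModelForCausalLM

alpha = 0.3  # extrapolation coefficient reported in this card

# "SFT" (weaker) and "DPO/RLHF" (stronger) checkpoints in the ExPO setup.
sft = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-70B-Instruct", torch_dtype=torch.bfloat16
)
dpo = AutoModelForCausalLM.from_pretrained(
    "abacusai/Smaug-Llama-3-70B-Instruct", torch_dtype=torch.bfloat16
)

sft_state = sft.state_dict()
expo_state = dpo.state_dict()

with torch.no_grad():
    for name, dpo_param in expo_state.items():
        # Move each tensor further along the SFT -> DPO direction.
        expo_state[name] = dpo_param + alpha * (dpo_param - sft_state[name])

dpo.load_state_dict(expo_state)
dpo.save_pretrained("Smaug-Llama-3-70B-Instruct-ExPO")  # hypothetical output path
```

With alpha = 0 this simply recovers the DPO/RLHF checkpoint; larger alpha pushes the weights further along the direction from the SFT model to the aligned model.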