---
language:
- en
license: llama3
---

# Llama-3-Instruct-8B-SimPO-ExPO

The extrapolated (ExPO) model based on [`princeton-nlp/Llama-3-Instruct-8B-SimPO`](https://huggingface.co/princeton-nlp/Llama-3-Instruct-8B-SimPO) and [`meta-llama/Meta-Llama-3-8B-Instruct`](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct), as described in the "[Weak-to-Strong Extrapolation Expedites Alignment](https://arxiv.org/abs/2404.16792)" paper. Specifically, we obtain this model by extrapolating **(alpha = 0.3)** from the weights of the SFT and DPO/RLHF checkpoints, achieving stronger alignment with human preferences.
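For illustration, below is a minimal sketch of the extrapolation step (not the authors' release script), assuming both checkpoints share the same architecture and parameter names. Following the paper, each weight is moved further along the SFT-to-aligned direction: theta_expo = theta_aligned + alpha * (theta_aligned - theta_sft). The output path is a placeholder.

```python
import torch
from transformers import AutoModelForCausalLM

alpha = 0.3

# Aligned (SimPO) checkpoint and its SFT base.
aligned = AutoModelForCausalLM.from_pretrained(
    "princeton-nlp/Llama-3-Instruct-8B-SimPO", torch_dtype=torch.bfloat16
)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct", torch_dtype=torch.bfloat16
)

base_state = base.state_dict()
with torch.no_grad():
    for name, param in aligned.state_dict().items():
        # theta_expo = theta_aligned + alpha * (theta_aligned - theta_sft)
        param.add_(param - base_state[name], alpha=alpha)

# Placeholder output directory for the extrapolated model.
aligned.save_pretrained("Llama-3-Instruct-8B-SimPO-ExPO")
```

Since the update is applied in place to the aligned model's parameters, saving it afterwards yields the extrapolated checkpoint directly; no third model needs to be instantiated.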