---
license: llama3
language:
- en
---

# Smaug-Llama-3-70B-Instruct-ExPO

The extrapolated (ExPO) model based on [`abacusai/Smaug-Llama-3-70B-Instruct`](https://huggingface.co/abacusai/Smaug-Llama-3-70B-Instruct) and [`meta-llama/Meta-Llama-3-70B-Instruct`](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct), as in the "[Weak-to-Strong Extrapolation Expedites Alignment](https://arxiv.org/abs/2404.16792)" paper.

Specifically, we obtain this model by extrapolating with **alpha = 0.3** from the weights of the SFT and DPO/RLHF checkpoints, which yields improved alignment with human preferences.
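As a rough illustration, the ExPO paper's extrapolation amounts to moving past the aligned checkpoint along the SFT-to-aligned direction in weight space: `theta_expo = theta_aligned + alpha * (theta_aligned - theta_sft)`. The sketch below is a minimal, hypothetical implementation of that update over per-parameter state dicts; the function name and dict-of-weights interface are assumptions for illustration, not code from this repository.

```python
def expo_extrapolate(sft_state, aligned_state, alpha=0.3):
    """Weak-to-strong extrapolation (ExPO) over matching parameter dicts.

    Computes theta_expo = theta_aligned + alpha * (theta_aligned - theta_sft)
    for every parameter name. Works for plain floats as well as tensors,
    assuming both checkpoints share identical architectures and keys.
    """
    return {
        name: aligned + alpha * (aligned - sft_state[name])
        for name, aligned in aligned_state.items()
    }

# Toy example with scalar "weights": 2.0 + 0.3 * (2.0 - 1.0) = 2.3
extrapolated = expo_extrapolate({"w": 1.0}, {"w": 2.0}, alpha=0.3)
```

In practice this would be applied to the full `state_dict` of each checkpoint (e.g. loaded via `transformers`), with both models in the same precision and on the same device.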