zephyr_0.05

The DPO-trained model initialized from alignment-handbook/zephyr-7b-sft-full and trained on 5% of the data in HuggingFaceH4/ultrafeedback_binarized, as described in the paper "Weak-to-Strong Extrapolation Expedites Alignment".
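A minimal usage sketch. It assumes the model inherits the Zephyr-style chat template from its SFT base (an assumption, not confirmed by this card); the `format_zephyr_prompt` helper below is a hypothetical stand-in for the tokenizer's built-in chat template.

```python
# Sketch: prompting chujiezheng/zephyr_0.05 in the Zephyr chat format.
# Assumption: the chat template matches the zephyr-7b-sft-full base model.

def format_zephyr_prompt(messages):
    """Render a list of {'role', 'content'} dicts as a Zephyr-style prompt."""
    parts = []
    for m in messages:
        parts.append(f"<|{m['role']}|>\n{m['content']}</s>\n")
    parts.append("<|assistant|>\n")  # cue the model to respond
    return "".join(parts)

if __name__ == "__main__":
    # With transformers installed, generation would look roughly like:
    #   from transformers import AutoModelForCausalLM, AutoTokenizer
    #   tok = AutoTokenizer.from_pretrained("chujiezheng/zephyr_0.05")
    #   model = AutoModelForCausalLM.from_pretrained(
    #       "chujiezheng/zephyr_0.05", torch_dtype="bfloat16")
    #   out = model.generate(**tok(prompt, return_tensors="pt"),
    #                        max_new_tokens=256)
    prompt = format_zephyr_prompt([{"role": "user", "content": "Hello!"}])
    print(prompt)
```

In practice, prefer `tokenizer.apply_chat_template(...)` so the prompt format always matches the tokenizer's own configuration.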

Format: Safetensors
Model size: 7.24B params
Tensor type: BF16
