---
license: apache-2.0
language:
  - en
---

# zephyr_0.2

This model was DPO-trained from [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on 20% of the [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) data, as described in the paper "Weak-to-Strong Extrapolation Expedites Alignment".
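
## Usage

A minimal sketch of loading the model for chat-style inference with 🤗 Transformers. The repo id `chujiezheng/zephyr_0.2` is an assumption based on this card's title and author, and the prompt and generation settings are illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id; adjust if the model is hosted under a different name.
model_id = "chujiezheng/zephyr_0.2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Build a chat prompt using the tokenizer's chat template.
messages = [
    {"role": "user", "content": "Explain DPO in one sentence."},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Generate and decode only the newly produced tokens.
outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```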