
Zephyr-7B-DICE-Iter2

This model was developed using Bootstrapping Language Models with DPO Implicit Rewards (DICE) at iteration 2, starting from HuggingFaceH4/zephyr-7b-beta.
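DICE scores responses with the reward that DPO training implicitly defines: the scaled log-ratio of the policy's likelihood to the reference model's likelihood. A minimal sketch of that quantity, assuming per-response summed token log-probabilities are already available; the function name and the `beta` value are illustrative, not from the DICE codebase:

```python
import math

def dpo_implicit_reward(policy_logprob: float, ref_logprob: float, beta: float = 0.1) -> float:
    """DPO's implicit reward for a response y given prompt x:
    beta * log( pi(y|x) / pi_ref(y|x) ),
    computed here from summed token log-probabilities."""
    return beta * (policy_logprob - ref_logprob)

# A response the policy now prefers more strongly than the reference
# model did receives a higher implicit reward.
chosen = dpo_implicit_reward(policy_logprob=-10.0, ref_logprob=-12.0)
rejected = dpo_implicit_reward(policy_logprob=-15.0, ref_logprob=-14.0)
```

In DICE, such rewards are used to rank the model's own generations and build preference pairs for the next DPO iteration.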

Model Description

  • Model type: A 7B parameter GPT-like model fine-tuned on synthetic datasets.
  • Language(s) (NLP): Primarily English
  • License: MIT
  • Fine-tuned from model: HuggingFaceH4/zephyr-7b-beta
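Since the model is fine-tuned from zephyr-7b-beta, it presumably inherits the Zephyr chat format; in practice you would let `tokenizer.apply_chat_template` handle this. A minimal sketch of that format, assuming the standard Zephyr role markers:

```python
def format_zephyr_chat(messages: list[dict]) -> str:
    """Render chat messages in the Zephyr chat format
    (<|system|> / <|user|> / <|assistant|> markers, </s> terminators),
    ending with an open assistant turn for generation."""
    parts = []
    for m in messages:
        parts.append(f"<|{m['role']}|>\n{m['content']}</s>\n")
    parts.append("<|assistant|>\n")
    return "".join(parts)

prompt = format_zephyr_chat([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is DICE?"},
])
```

The resulting string can be tokenized and passed to the model for generation.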

AlpacaEval Leaderboard Evaluation Results

| Model                | LC (Length-Controlled) Win Rate (%) | Win Rate (%) |
|----------------------|-------------------------------------|--------------|
| Zephyr-7b-beta       | 12.69                               | 10.71        |
| Zephyr-7B-DICE-Iter1 | 19.03                               | 17.67        |
| Zephyr-7B-DICE-Iter2 | 20.71                               | 20.16        |

Citation

@article{chen2024bootstrapping,
  title={Bootstrapping Language Models with DPO Implicit Rewards},
  author={Chen, Changyu and Liu, Zichen and Du, Chao and Pang, Tianyu and Liu, Qian and Sinha, Arunesh and Varakantham, Pradeep and Lin, Min},
  journal={arXiv preprint arXiv:2406.09760},
  year={2024}
}
Model size: 7.24B params · Tensor type: BF16 (safetensors)
