
This is the base reward model ("LLaMA-2 RM") used in the paper "DogeRM: Equipping Reward Models with Domain Knowledge through Model Merging".

The detailed training/evaluation information can be found at https://api.wandb.ai/links/merge_exp/g56s1tul.

For detailed information about this model, please refer to our paper.
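As a concrete illustration (not taken from the paper or this card), a LLaMA-2-based reward model is typically loaded through the transformers sequence-classification interface and queried for a scalar score. The prompt/response concatenation format below is an assumption; check the training setup before relying on it:

```python
# Minimal sketch of scoring a prompt/response pair with this reward model.
# Assumptions: the checkpoint exposes a standard sequence-classification head
# with a single scalar logit, and plain "prompt\nresponse" formatting suffices.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "miulab/llama2-7b-ultrafeedback-rm"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

prompt = "What is the capital of France?"
response = "The capital of France is Paris."
text = f"{prompt}\n{response}"  # assumed input format

with torch.no_grad():
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    reward = model(**inputs).logits[0].item()  # scalar reward score

print(reward)
```

Higher scores indicate responses the model prefers; comparing the scores of two candidate responses to the same prompt gives a pairwise preference.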

If you find this model useful, please cite our paper:

@article{lin2024dogerm,
  title={DogeRM: Equipping Reward Models with Domain Knowledge through Model Merging},
  author={Lin, Tzu-Han and Li, Chen-An and Lee, Hung-yi and Chen, Yun-Nung},
  journal={arXiv preprint arXiv:2407.01470},
  year={2024}
}
Model size: 6.61B params (Safetensors, FP16)

Model: miulab/llama2-7b-ultrafeedback-rm