Tags: Text Classification · Safetensors · English · llama

This is the base reward model ("LLaMA-2 RM") used in the paper "DogeRM: Equipping Reward Models with Domain Knowledge through Model Merging".

Detailed training and evaluation information is available at https://api.wandb.ai/links/merge_exp/g56s1tul.

For further details about this model, please refer to our paper.
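The DogeRM paper equips a reward model such as this one with domain knowledge by merging its weights with those of a domain-specific fine-tuned model. A minimal sketch of weight-space merging via linear interpolation of shared parameters, using plain lists of floats to stand in for tensors (the function name and the simple interpolation recipe are illustrative assumptions; DogeRM's exact merging procedure is described in the paper):

```python
def merge_state_dicts(rm_sd, domain_sd, alpha=0.5):
    """Linearly interpolate parameters shared by two models.

    A sketch of weight-space model merging: parameters present in both
    state dicts are interpolated with weight `alpha`; parameters unique
    to the reward model (e.g. its scalar reward head) are kept as-is.
    """
    merged = {}
    for name, rm_w in rm_sd.items():
        if name in domain_sd:
            dom_w = domain_sd[name]
            merged[name] = [(1 - alpha) * r + alpha * d
                            for r, d in zip(rm_w, dom_w)]
        else:
            merged[name] = list(rm_w)  # reward-model-only parameters
    return merged

# Toy example with two-element "weights":
rm = {"layer.weight": [1.0, 2.0], "reward_head.weight": [0.5]}
domain = {"layer.weight": [3.0, 4.0]}
merged = merge_state_dicts(rm, domain, alpha=0.5)
print(merged["layer.weight"])  # [2.0, 3.0]
```

In practice the same loop would run over `torch` state dicts of the reward model and a domain fine-tuned LLaMA-2 model sharing the same architecture.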

If you find this model useful, please cite our paper:

@article{lin2024dogerm,
  title={DogeRM: Equipping Reward Models with Domain Knowledge through Model Merging},
  author={Lin, Tzu-Han and Li, Chen-An and Lee, Hung-yi and Chen, Yun-Nung},
  journal={arXiv preprint arXiv:2407.01470},
  year={2024}
}
Model size: 6.61B params (Safetensors, FP16)
Model: miulab/llama2-7b-ultrafeedback-rm