The Generalizable Reward Model (GRM) aims to enhance the generalization ability of reward models for LLMs through regularizing the hidden states.

Paper: [Regularizing Hidden States Enables Learning Generalizable Reward Model for LLMs](https://arxiv.org/abs/2406.10216).

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64d45451c34a346181b130dd/ieB57iMlZuK8zyTadW2M-.png)

The framework is shown above. The introduced text generation regularization markedly improves the accuracy of learned reward models across a variety of out-of-distribution tasks and effectively alleviates the over-optimization issue in RLHF (even with corrupted preference data), offering a more reliable and robust preference learning paradigm.
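
For concreteness, the sketch below illustrates how such a combined objective can be written: a Bradley-Terry preference loss on scalar rewards plus a next-token text-generation loss computed from the same hidden states. This is a minimal sketch under assumed shapes and names (including the weighting coefficient `alpha`), not the released training code.

```python
# Minimal sketch of a GRM-style objective: pairwise reward loss plus a
# text-generation regularization term from the same hidden states.
# All shapes, dummy tensors, and `alpha` are illustrative assumptions.
import torch
import torch.nn.functional as F

hidden_size, vocab_size, alpha = 16, 32, 0.1

# Stand-ins for backbone hidden states of chosen/rejected sequences
# (batch=2, seq_len=8); in practice these come from the shared LLM backbone.
h_chosen = torch.randn(2, 8, hidden_size)
h_rejected = torch.randn(2, 8, hidden_size)

reward_head = torch.nn.Linear(hidden_size, 1)       # scalar reward head
lm_head = torch.nn.Linear(hidden_size, vocab_size)  # LM head kept for regularization

# Bradley-Terry preference loss on last-token rewards.
r_chosen = reward_head(h_chosen[:, -1]).squeeze(-1)
r_rejected = reward_head(h_rejected[:, -1]).squeeze(-1)
reward_loss = -F.logsigmoid(r_chosen - r_rejected).mean()

# Text-generation regularization: next-token prediction on the chosen
# response, anchoring hidden states to remain useful for language modeling.
labels = torch.randint(vocab_size, (2, 8))  # stand-in for chosen-token ids
logits = lm_head(h_chosen)
sft_loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, vocab_size), labels[:, 1:].reshape(-1)
)

loss = reward_loss + alpha * sft_loss
```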

This reward model is finetuned from [gemma-2b-it](https://huggingface.co/google/gemma-2b-it) using the [weqweasdas/preference_dataset_mixture2_and_safe_pku](https://huggingface.co/datasets/weqweasdas/preference_dataset_mixture2_and_safe_pku) dataset.
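
A minimal usage sketch follows, assuming the checkpoint loads as a sequence-classification reward model and that the tokenizer ships a chat template; the model id below is a placeholder for this repository's actual id, not confirmed by this card.

```python
# Hedged usage sketch: score an assistant reply with the reward model.
# "your-org/GRM-Gemma-2B" is a placeholder id; replace it with this repo's id.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "your-org/GRM-Gemma-2B"  # placeholder, substitute the real repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=1)

messages = [
    {"role": "user", "content": "How do I bake bread?"},
    {"role": "assistant", "content": "Mix flour, water, salt, and yeast, knead, let it rise, then bake."},
]
# Assumes the tokenizer provides a chat template (true for gemma-2b-it).
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt")
with torch.no_grad():
    score = model(input_ids).logits[0, 0].item()  # scalar reward for the reply
print(score)
```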