kawine committed
Commit 05a9d3a
Parent(s): 37e262b

Update README.md

Files changed (1)
  1. README.md +9 -1
README.md CHANGED
@@ -162,10 +162,18 @@ If you want to finetune a model to predict human preferences (e.g., for NLG eval
  If this is still over 512 tokens, simply skip the example.
  4. **Train for 1 epoch only**, as the [InstructGPT paper](https://arxiv.org/abs/2203.02155) suggests.
  Since the same comment appears in multiple preferences, it is easy to overfit to the data.
- 5. **Train on less data.**
+ 5. **Training on less data may help.**
  Preferences with a large score ratio (e.g., comment A having 2x the score of comment B) will provide a stronger signal for finetuning the model, so you may only want to consider preferences above a certain `score_ratio`.
  The number of preferences per post is Pareto-distributed, so to prevent the model from over-fitting to certain posts, you may want to limit the number of preferences from a particular post.

+ ### Evaluating
+
+ Since it is easier to predict stronger preferences than weaker ones (e.g., preferences with a big difference in comment score), we recommend reporting a performance curve instead of a single number.
+ For example, here is the accuracy curve for a FLAN-T5-xl model trained using the suggestions above, on only preferences with a 2+ score ratio and using no more than 5 preferences from each post to prevent overfitting:
+

  ## Disclaimer
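
To make the preprocessing suggestions in the diff above concrete (skipping examples that are still over 512 tokens, keeping only preferences above a `score_ratio` threshold, and capping how many preferences a single post contributes), here is a minimal sketch. It assumes this is the SHP dataset card, so the `stanfordnlp/SHP` dataset id and the column names `post_id`, `history`, `human_ref_A`, and `human_ref_B` are assumptions, and the specific thresholds are illustrative rather than prescribed:

```python
# Illustrative filtering sketch -- dataset id and column names are assumed, not confirmed above.
from collections import defaultdict

from datasets import load_dataset
from transformers import AutoTokenizer

MAX_TOKENS = 512       # skip examples that are still too long after truncation
MIN_SCORE_RATIO = 2.0  # keep only strongly-ordered preferences
MAX_PER_POST = 5       # cap how many preferences one post can contribute

dataset = load_dataset("stanfordnlp/SHP", split="train")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xl")

per_post_count = defaultdict(int)

def keep(example):
    # 1. Require a strong preference signal.
    if example["score_ratio"] < MIN_SCORE_RATIO:
        return False
    # 2. Limit the number of preferences drawn from any one post.
    if per_post_count[example["post_id"]] >= MAX_PER_POST:
        return False
    # 3. Rough length check: skip examples whose combined text exceeds the token limit.
    text = example["history"] + example["human_ref_A"] + example["human_ref_B"]
    if len(tokenizer(text)["input_ids"]) > MAX_TOKENS:
        return False
    per_post_count[example["post_id"]] += 1
    return True

# The closure is stateful, so run the filter in a single process (the default).
filtered = dataset.filter(keep)
print(f"kept {len(filtered)} of {len(dataset)} preferences")
```

The per-post cap here is applied greedily in dataset order; sorting by `score_ratio` first would instead keep the strongest preferences from each post.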
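
For the evaluation advice, one way to report a performance curve rather than a single number is to compute accuracy over progressively stronger preferences, i.e., over subsets with a higher minimum `score_ratio`. The sketch below assumes a `test` split with `score_ratio` and `labels` columns, and uses stand-in predictions where your model's 0/1 choices would go:

```python
# Sketch of an accuracy-vs-preference-strength curve; split and column names are assumed.
import matplotlib.pyplot as plt
import numpy as np
from datasets import load_dataset

test = load_dataset("stanfordnlp/SHP", split="test")
ratios = np.array(test["score_ratio"])
labels = np.array(test["labels"])  # assumed: 1 if comment A was preferred, else 0

# Stand-in: replace with your model's predicted labels for each test example.
predictions = np.ones_like(labels)

# Accuracy restricted to preferences at or above each score-ratio threshold.
thresholds = np.arange(1.0, 5.1, 0.5)
accuracies = [(predictions[ratios >= t] == labels[ratios >= t]).mean() for t in thresholds]

plt.plot(thresholds, accuracies, marker="o")
plt.xlabel("minimum score ratio")
plt.ylabel("accuracy")
plt.title("Preference-prediction accuracy vs. preference strength")
plt.savefig("accuracy_curve.png")
```

Reading accuracy off such a curve makes it clear whether a model is only getting the lopsided, easy preferences right.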