kawine committed
Commit 534d986
1 Parent(s): 2d3ad7b

Update README.md

Files changed (1):
  1. README.md +46 -48

README.md CHANGED
@@ -33,7 +33,20 @@ How is SHP different from [Anthropic's HH-RLHF dataset](https://huggingface.co/d
 
 ## Data Structure
 
- Here's an example from the `askculinary` training data:
 ```
 {
     `post_id`:"qt3nxl",
@@ -68,7 +81,7 @@ where the fields are:
 - ```human_ref_A```: text of comment A (string)
 - ```human_ref_B```: text of comment B (string)
 - ```labels```: the preference label -- it is 1 if A is preferred to B; 0 if B is preferred to A. This was randomized such that the label distribution is roughly 50/50. (integer)
- - ```seconds_difference```: how many seconds after the less preferred comment the more preferred one was created (will always be positive) (integer)
 - ```score_ratio```: the ratio score_A:score_B (will be >= 2) (float)
 
@@ -111,36 +124,48 @@ Since different posts have different numbers of comments, the number of preferen
 | legaladvice | 21170 | 1106 | 1011 | 23287 |
 | ALL | 348718 | 18436 | 18409 | 385563 |
 
- The input in SHP contains more [FLAN-T5-usable information](https://icml.cc/virtual/2022/oral/16634) about the preference label than in
- 
- Specifically, given a post P and two comments (A,B) we only included the preference A > B in the dataset if
- 1. A was written *no earlier than* B.
- 2. Despite being written later, A has a score that is at least 2 times as high as B's.
- 3. Both comments have a score >= 2 and the post has a score >= 10.
- 4. The post is a self-post (i.e., a body of text and not a link to another page) made before 2023, was not edited, and is not NSFW (over 18).
- 5. Neither comment was made by a deleted user, a moderator, or the post creator. The post was not made by a deleted user or moderator.
- 
- Since comments made earlier get more visibility, the first condition is needed to ensure that A's higher score is not the result of a first-mover advantage.
- Since the comment score is also a noisy estimate of the comment's utility, the second and third conditions were enforced to ensure that the preference is genuine.
 
- ## Files
 
  ## Disclaimer
 
- Although we filtered out posts with NSFW (over 18) content, some of the data may contain discriminatory or harmful language.
 The data does not reflect the views of the dataset creators.
 Please only engage with the data in accordance with your own personal risk tolerance.
 
@@ -148,34 +173,7 @@ Reddit users on these subreddits are also not necessarily representative of the
 As always, remember to evaluate!
 
 
- ## FAQs
- 
- **Q**: *I'm trying to train a FLAN-T5/T5 model on these preferences, but the loss won't converge. Help!*
- 
- **A**: The most likely problem is that you're feeding the post text AND one or both comments as input, which is a lot larger than the 512 tokens these models can support.
- Even though they use relative position embeddings, in our experience, feeding in longer inputs is not helpful when training a preference/reward model on this data.
- To avoid this, truncate the post text as much as possible, such that the whole input is under 512 tokens (do not truncate the comment(s), however). If the input is still over 512 tokens, simply skip the example.
- This should allow you to train on most of the examples and get a preference model that is ~75% accurate at predicting human preferences.
- We are currently training a preference model on this data and will make it available shortly.
- 
- **Q**: *Why did you threshold the score ratio rather than the score difference when filtering preferences?*
- 
- **A**: Some Reddit posts get far less traffic than others, which means their comments have lower absolute scores.
- An absolute difference threshold would disproportionately exclude comments from these posts, a kind of bias that we didn't want to introduce.
- 
- **Q**: *Did you scrape every post on those 18 subreddits?*
- 
- **A**: No. Reddit makes it very difficult to get anything beyond the top 1000 posts.
- We started with the top-scoring 1000 posts (of all time) and searched for the 25 most similar posts to each one using the Reddit search function.
- By doing this recursively, we scraped up to 7500 post IDs for each subreddit and then used the AsyncPRAW API to scrape the top 50 comments from each post.
- We limited the scraping to 50 comments per post because the number of comments per post is Pareto-distributed, and we did not want a relatively small number of posts dominating the data.
- 
- **Q**: *How did you preprocess the text?*
- 
- **A**: We tried to keep preprocessing to a minimum. Subreddit-specific abbreviations were expanded ("CMV" to "Change my view that").
- In hyperlinks, only the referring text was kept and the URL was removed (if the URL was written out, then it was kept).
- 
 ## Contact
 
- Please contact kawin@stanford.edu if you have any questions about the data.
 
 
 ## Data Structure
 
+ There are 18 directories, one for each subreddit, and each directory contains a JSONL file for the training, validation, and test data.
+ Here's how to get the data using Huggingface's `datasets` library:
+ 
+ ```python
+ from datasets import load_dataset
+ 
+ # Load all the data (the subsets share the same schema)
+ dataset = load_dataset("stanfordnlp/shp")
+ 
+ # Load the data from one subreddit
+ dataset = load_dataset("stanfordnlp/shp", data_dir="askculinary")
+ ```
+ 
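As a quick sanity check, you can inspect the splits and a single example (assuming the `dataset` object loaded in the snippet above):

```python
# Inspect what was loaded (continues from the snippet above).
print(dataset)              # DatasetDict with train/validation/test splits
print(dataset["train"][0])  # one preference example; its fields are described below
```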
+ Here's an example from `askculinary`/train.json:
 ```
 {
     `post_id`:"qt3nxl",
 
 - ```human_ref_A```: text of comment A (string)
 - ```human_ref_B```: text of comment B (string)
 - ```labels```: the preference label -- it is 1 if A is preferred to B; 0 if B is preferred to A. This was randomized such that the label distribution is roughly 50/50. (integer)
+ - ```seconds_difference```: how many seconds after the less preferred comment the more preferred one was created (will always be >= 0) (integer)
 - ```score_ratio```: the ratio score_A:score_B (will be >= 2) (float)
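To make the schema concrete, here is a minimal sketch of recovering the preferred and dispreferred comment from a single example (assuming the `dataset` object loaded earlier):

```python
example = dataset["train"][0]

# labels == 1 means comment A is preferred; 0 means comment B is preferred.
preferred = example["human_ref_A"] if example["labels"] == 1 else example["human_ref_B"]
dispreferred = example["human_ref_B"] if example["labels"] == 1 else example["human_ref_A"]
```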
 
 | legaladvice | 21170 | 1106 | 1011 | 23287 |
 | ALL | 348718 | 18436 | 18409 | 385563 |
 
+ ### Post and Comment Selection
+ 
+ Given a post P and two comments (A,B) we only included the preference A > B in the dataset if
+ 1. A was written *no earlier than* B and A has a higher score than B.
+ 2. The post is a self-post (i.e., a body of text and not a link to another page) made before 2023, was not edited, and is not NSFW (over 18).
+ 3. Neither comment was made by a deleted user, a moderator, or the post creator. The post was not made by a deleted user or moderator.
+ 4. The post P has a score >= 10 and each comment has a score >= 2 (upvoted at least once).
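A minimal sketch of these checks, assuming each scraped post and comment is represented as a dict with hypothetical `score` and `created_utc` fields (the self-post, editing, NSFW, and author-identity conditions are omitted for brevity):

```python
def admissible_preference(post, a, b):
    """Return True if the preference A > B passes criteria 1 and 4 above."""
    if a["created_utc"] < b["created_utc"]:  # A must be written no earlier than B
        return False
    if a["score"] <= b["score"]:             # A must have the higher score
        return False
    return post["score"] >= 10 and a["score"] >= 2 and b["score"] >= 2
```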
 
+ Reddit makes it very difficult to get anything beyond the top 1000 posts per subreddit.
+ We started with the top-scoring 1000 posts (of all time) and searched for the 25 most similar posts to each one using the Reddit search function.
+ By doing this recursively, we scraped up to 7500 post IDs for each subreddit and then used the AsyncPRAW API to scrape the top 50 comments from each post.
+ We limited the scraping to 50 comments per post because the number of comments per post is Pareto-distributed, and we did not want a relatively small number of posts dominating the data.
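The recursive search amounts to a breadth-first crawl over post IDs; in the sketch below, `search_similar` is a hypothetical stand-in for the Reddit search call described above:

```python
from collections import deque

def crawl_post_ids(top_posts, search_similar, cap=7500):
    """Expand a seed set of post IDs by repeatedly searching for similar posts."""
    seen = set(top_posts)
    queue = deque(top_posts)
    while queue and len(seen) < cap:
        for similar_id in search_similar(queue.popleft()):  # ~25 similar posts
            if similar_id not in seen and len(seen) < cap:
                seen.add(similar_id)
                queue.append(similar_id)
    return seen
```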
 
+ ### Preprocessing
+ 
+ We tried to keep preprocessing to a minimum. Subreddit-specific abbreviations were expanded ("CMV" to "Change my view that").
+ In hyperlinks, only the referring text was kept and the URL was removed (if the URL was written out, then it was kept).
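For instance, the hyperlink rule could be implemented with a regular expression along these lines (a sketch, not the authors' actual preprocessing code):

```python
import re

def strip_markdown_links(text):
    # Keep only the referring text of [text](url) links; bare URLs are left as-is.
    return re.sub(r"\[([^\]]*)\]\([^)]*\)", r"\1", text)

strip_markdown_links("See [this guide](https://example.com) or https://example.com")
# -> 'See this guide or https://example.com'
```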
 
 
+ ## Building a Preference Model
+ 
+ ### Finetuning
+ 
+ If you want to finetune a model to predict human preferences (e.g., for NLG evaluation or an RLHF reward model), here are some helpful tips:
+ 
+ 1. **Use a sufficiently large model.** With FLAN-T5-xl, you can get 65-85% accuracy depending on the subreddit.
+ 2. **Do in-domain prediction.** Out-of-domain performance will be poor if the subreddits are unrelated (e.g., if you fine-tune on `askculinary` preferences and test on `askcarguys` preferences).
+ 3. **Preprocess the data.** The total input length should fit under the model's token limit (usually 512 tokens).
+ Although models like FLAN-T5 use relative position embeddings, we found that the loss would not converge if we finetuned them on the entire input.
+ To avoid this, truncate the post text (in the `history` field) as much as possible, such that the whole input is under 512 tokens (do not truncate the comment(s), however).
+ If the input is still over 512 tokens, simply skip the example.
+ 4. **Train for 1 epoch only**, as the [InstructGPT paper](https://arxiv.org/abs/2203.02155) suggests.
+ Since the same comment appears in multiple preferences, it is easy to overfit to the data.
+ 5. **Train on less data.**
+ Preferences with a large score ratio (e.g., comment A having 2x the score of comment B) will provide a stronger signal for finetuning the model, so you may only want to consider preferences above a certain `score_ratio`.
+ The number of preferences per post is Pareto-distributed, so to prevent the model from over-fitting to certain posts, you may want to limit the number of preferences from a particular post.
+ 
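Here is a minimal sketch of tips 3 and 5 together, assuming a FLAN-T5 tokenizer from `transformers` and a hypothetical prompt template (not the authors' exact format); the token budget is approximate because the pieces are tokenized separately:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xl")
MAX_TOKENS = 512

def build_input(example):
    # Tokenize the comments first, then give the post whatever budget remains;
    # return None (i.e., skip the example) if the comments alone are too long.
    fixed = (f"\n\nRESPONSE A: {example['human_ref_A']}"
             f"\n\nRESPONSE B: {example['human_ref_B']}"
             f"\n\nWhich response is better?")
    budget = MAX_TOKENS - len(tokenizer(fixed)["input_ids"])
    if budget <= 0:
        return None
    post_ids = tokenizer(example["history"])["input_ids"][:budget]
    return "POST: " + tokenizer.decode(post_ids, skip_special_tokens=True) + fixed

# Tip 5: keep only confident preferences; the 2.5 threshold is illustrative.
train = dataset["train"].filter(lambda x: x["score_ratio"] >= 2.5)
```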
 
 
 ## Disclaimer
 
+ Although we filtered out posts with NSFW (over 18) content and chose an innocuous set of subreddits, some of the data may contain discriminatory or harmful language.
 The data does not reflect the views of the dataset creators.
 Please only engage with the data in accordance with your own personal risk tolerance.
 
 
 As always, remember to evaluate!
 
 
 ## Contact
 
+ Please contact kawin@stanford.edu if you have any questions about the data.
+ This project is being maintained by Kawin Ethayarajh, Heidi (Chenyu) Zhang, and Yizhong Wang.