---
license: mit
task_categories:
  - text-generation
tags:
  - human-feedback
  - rlhf
  - preferences
  - reddit
size_categories:
  - 100K<n<1M
language:
  - en
---

# 🚢 Stanford Human Preferences Dataset (SHP)

## Summary

SHP is a dataset of 385K aggregate human preferences over Reddit comments in 18 different subject areas, from cooking to legal advice. It is primarily intended to be used for training reward models for RLHF and automatic evaluation models for NLG.

Each example is a Reddit post and a pair of top-level comments on that post, where one comment is more preferred by Reddit users (in aggregate). SHP exploits the fact that if comment A was written after comment B but has a higher score nonetheless, then A is definitively more preferred than B. If A had been written before B, we could not conclude this, since A's higher score could simply have come from the extra visibility of being posted first.
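
In code, this timing rule is just a two-clause check. A minimal sketch (field names mirror the dataset schema described below):

```python
def is_reliable_preference(created_at_utc_A: int, created_at_utc_B: int,
                           score_A: int, score_B: int) -> bool:
    """A > B is a reliable signal only when A was posted no earlier than B
    yet still outscores it; otherwise A's lead may just be extra visibility."""
    return created_at_utc_A >= created_at_utc_B and score_A > score_B
```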

## How is SHP different from Anthropic's HH-RLHF dataset?

| Dataset | Input | Output | No. of Domains | Data Format |
| --- | --- | --- | --- | --- |
| SHP | Reddit post and comments | Aggregate Preference Label | 18 (cooking, cars, ...) | Question/Answer + Assertion/Response |
| Anthropic/HH-RLHF | Dialogue history with LLM | Individual Preference Label | 2 (harmful, helpful) | Multi-turn Dialogue |

## Data Structure

There are 18 directories, one for each subreddit, and each directory contains a JSONL file for each of the training, validation, and test splits. Here's how to load the data using Hugging Face's `datasets` library:

```python
from datasets import load_dataset

# Load all of the data (every subreddit shares the same schema)
dataset = load_dataset("stanfordnlp/shp")

# Load the data for a single subreddit
dataset = load_dataset("stanfordnlp/shp", data_dir="askculinary")
```
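
To sanity-check what was loaded, you can inspect the splits and peek at one record (a minimal sketch):

```python
print(dataset)  # a DatasetDict with train/validation/test splits

example = dataset["train"][0]
print(example["history"][:200])  # the post title + body
print(example["labels"])         # 1 if human_ref_A is the preferred comment, else 0
```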

Here's an example from `askculinary/train.json`:

```json
{
    "post_id": "qt3nxl",
    "domain": "askculinary_train",
    "upvote_ratio": 0.98,
    "history": "What's the best way to disassemble raspberries? Like this, but down to the individual seeds: https://i.imgur.com/Z0c6ZKE.jpg  I've been pulling them apart with tweezers and it's really time consuming. I have about 10 pounds to get through this weekend.",
    "c_root_id_A": "hkh25sc",
    "c_root_id_B": "hkh25lp",
    "created_at_utc_A": 1636822112,
    "created_at_utc_B": 1636822110,
    "score_A": 340,
    "score_B": 166,
    "human_ref_A": "Pectinex, perhaps?  It's an enzyme that breaks down cellulose. With citrus, you let it sit in a dilute solution of pectinex overnight to break down the connective tissues. You end up with perfect citrus supremes. If you let the raspberries sit for a shorter time, I wonder if it would separate the seeds the same way...?  Here's an example: https://www.chefsteps.com/activities/perfect-citrus-supreme",
    "human_ref_B": "Raspberry juice will make a bright stain at first, but in a matter of weeks it will start to fade away to almost nothing. It is what is known in the natural dye world as a fugitive dye, it will fade even without washing or exposure to light. I hope she gets lots of nice photos of these stains on her dress, because soon that will be all she has left of them!",
    "labels": 1,
    "seconds_difference": 2.0,
    "score_ratio": 2.0481927711
}
```

where the fields are:

- `post_id`: the ID of the Reddit post (string)
- `domain`: the subreddit and split the example is drawn from, separated by an underscore (string)
- `upvote_ratio`: the upvote ratio of the Reddit post (float)
- `history`: the post title concatenated to the post body (string)
- `c_root_id_A`: the ID of comment A (string)
- `c_root_id_B`: the ID of comment B (string)
- `created_at_utc_A`: UTC timestamp of when comment A was created (integer)
- `created_at_utc_B`: UTC timestamp of when comment B was created (integer)
- `score_A`: score of comment A (integer)
- `score_B`: score of comment B (integer)
- `human_ref_A`: text of comment A (string)
- `human_ref_B`: text of comment B (string)
- `labels`: the preference label; it is 1 if A is preferred to B and 0 if B is preferred to A. This was randomized such that the label distribution is roughly 50/50. (integer)
- `seconds_difference`: how many seconds after the less preferred comment the more preferred one was created (always >= 0) (float)
- `score_ratio`: the ratio of the more preferred comment's score to the less preferred comment's score (always >= 1) (float)
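
For reward modeling, each record is typically unpacked into a context plus a (preferred, dispreferred) response pair. A minimal sketch, assuming a record `ex` with the fields above:

```python
def to_preference_pair(ex: dict) -> tuple[str, str, str]:
    """Return (context, preferred_response, dispreferred_response);
    labels == 1 means human_ref_A is the more preferred comment."""
    if ex["labels"] == 1:
        return ex["history"], ex["human_ref_A"], ex["human_ref_B"]
    return ex["history"], ex["human_ref_B"], ex["human_ref_A"]
```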

## Dataset Design

The data is sourced from Reddit, which is a public forum organized into topic-specific fora called subreddits. For example, the askculinary subreddit is where users ask cooking-related questions and are answered by other users.

### Subreddit Selection


SHP contains a train, validation, and test split for comments scraped from 18 different subreddits. We chose subreddits based on:

  1. whether they were well-known (subscriber count >= 50K)
  2. whether they were actively moderated
  3. whether comments had to be rooted in some objectivity, instead of being entirely about personal experiences (e.g., askscience vs. AskAmericans)

The train/validation/test splits were created by splitting the post IDs of a subreddit in 90%/5%/5% proportions respectively, so that no post would appear in multiple splits. Since different posts have different numbers of comments, the number of preferences in each split is not exactly 90%/5%/5%:

| subreddit | train | validation | test | total |
| --- | ---: | ---: | ---: | ---: |
| askacademia | 31450 | 2095 | 1708 | 35253 |
| askanthropology | 3910 | 203 | 268 | 4381 |
| askbaking | 44007 | 2096 | 1544 | 47647 |
| askcarguys | 3227 | 159 | 117 | 3503 |
| askculinary | 45710 | 2094 | 2563 | 50367 |
| askdocs | 6449 | 315 | 455 | 7219 |
| askengineers | 57096 | 3154 | 2638 | 62888 |
| askhistorians | 3264 | 113 | 164 | 3541 |
| askhr | 8295 | 641 | 395 | 9331 |
| askphilosophy | 10307 | 608 | 677 | 11592 |
| askphysics | 7364 | 409 | 587 | 8360 |
| askscience | 13316 | 899 | 977 | 15192 |
| asksciencefiction | 29382 | 1576 | 1987 | 32945 |
| asksocialscience | 2706 | 147 | 188 | 3041 |
| askvet | 3300 | 170 | 224 | 3694 |
| changemyview | 38173 | 1637 | 1836 | 41646 |
| explainlikeimfive | 19592 | 1014 | 1070 | 21676 |
| legaladvice | 21170 | 1106 | 1011 | 23287 |
| ALL | 348718 | 18436 | 18409 | 385563 |
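
The grouped split can be reproduced in a few lines (a sketch only; the seed and shuffling used for the released splits are not documented here):

```python
import random

def split_post_ids(post_ids: list[str], seed: int = 0):
    """Split post IDs 90/5/5 so that no post appears in more than one split."""
    ids = sorted(set(post_ids))
    random.Random(seed).shuffle(ids)
    n_train = int(0.9 * len(ids))
    n_val = int(0.05 * len(ids))
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]
```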

### Post and Comment Selection

Given a post P and two comments (A, B), we only included the preference A > B in the dataset if all of the following held (a sketch of these checks as a filter function follows the list):

1. A was written no earlier than B and A has a higher score than B.
2. The post is a self-post (i.e., a body of text and not a link to another page) made before 2023, was not edited, and is not NSFW (over 18).
3. Neither comment was made by a deleted user, a moderator, or the post creator. The post was not made by a deleted user or moderator.
4. The post P has a score >= 10 and each comment has a score >= 2 (upvoted at least once).
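
A minimal sketch of that filter, where the field names (`created_utc`, `is_self`, `over_18`, `edited`, `author`, `score`) are modeled on Reddit's post and comment attributes and are illustrative:

```python
def keep_pair(post: dict, a: dict, b: dict) -> bool:
    """Return True iff the preference a > b passes criteria 1-4 above.
    The pre-2023 cutoff and the moderator checks are omitted for brevity."""
    authors_ok = all(
        c["author"] is not None            # not a deleted user
        and c["author"] != post["author"]  # not the post creator
        for c in (a, b)
    )
    return (
        a["created_utc"] >= b["created_utc"] and a["score"] > b["score"]   # 1
        and post["is_self"] and not post["edited"] and not post["over_18"] # 2
        and authors_ok and post["author"] is not None                      # 3
        and post["score"] >= 10 and a["score"] >= 2 and b["score"] >= 2    # 4
    )
```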

Reddit makes it very difficult to get anything beyond the top 1000 posts for a subreddit. We started with the 1000 top-scoring posts (of all time) and used the Reddit search function to find the 25 most similar posts to each one. By doing this recursively, we scraped up to 7500 post IDs per subreddit and then used the AsyncPRAW API to scrape the top 50 comments from each post. We limited the scraping to 50 comments per post because the number of comments per post is Pareto-distributed, and we did not want a relatively small number of posts dominating the data.

### Preprocessing

We tried to keep preprocessing to a minimum. Subreddit-specific abbreviations were expanded ("CMV" to "Change my view that"). In hyperlinks, only the referring text was kept and the URL was removed (if the URL was written out, then it was kept).
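
For markdown-style hyperlinks, that rule amounts to keeping the anchor text and dropping the URL. A rough sketch (not the exact script used to build the dataset):

```python
import re

def strip_markdown_links(text: str) -> str:
    """Replace [anchor text](https://...) with just the anchor text."""
    return re.sub(r"\[([^\]]+)\]\(https?://[^)\s]+\)", r"\1", text)

strip_markdown_links("See [this guide](https://example.com/guide).")  # 'See this guide.'
```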

## Building a Preference Model

### Finetuning

If you want to finetune a model to predict human preferences (e.g., for NLG evaluation or an RLHF reward model), here are some helpful tips:

1. Use a sufficiently large model. With FLAN-T5-xl, you can get 65-85% accuracy, depending on the subreddit. The aggregate human preferences in SHP are more stable and easier to predict than the individual human preferences in the Anthropic data, and the strict data filtering described above helps as well.
2. Do in-domain prediction. Out-of-domain performance will be poor if the subreddits are unrelated (e.g., if you finetune on askculinary preferences and test on askcarguys preferences).
3. Preprocess the data. The total input length should fit under the model's token limit (usually 512 tokens). Although models like FLAN-T5 use positional embeddings, we found that the loss would not converge if we finetuned on the entire input. To avoid this, truncate the post text (in the `history` field) as much as needed so that the whole input is under 512 tokens (do not truncate the comments, however). If the input is still over 512 tokens, simply skip the example.
4. Train for 1 epoch only, as the InstructGPT paper suggests. Since the same comment appears in multiple preference pairs, it is easy to overfit to the data.
5. Train on less data. Preferences with a large score ratio (e.g., comment A having 2x the score of comment B) provide a stronger signal for finetuning, so you may want to keep only preferences above a certain `score_ratio`. The number of preferences per post is Pareto-distributed, so to prevent the model from over-fitting to certain posts, you may also want to cap the number of preferences taken from any single post (see the sketch after this list).
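
Tip 5's two filters can be combined into one pre-filtering pass. A minimal sketch, where the `min_ratio` and `max_per_post` thresholds are illustrative rather than prescribed:

```python
from collections import defaultdict

def prefilter(examples, min_ratio: float = 2.0, max_per_post: int = 5):
    """Keep only strong preferences (high score_ratio) and cap how many
    pairs any single post contributes, to reduce over-fitting."""
    kept, per_post = [], defaultdict(int)
    for ex in examples:
        if ex["score_ratio"] < min_ratio:
            continue  # scores too close: weak preference signal
        if per_post[ex["post_id"]] >= max_per_post:
            continue  # this post already contributes enough pairs
        per_post[ex["post_id"]] += 1
        kept.append(ex)
    return kept
```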

## Disclaimer

Although we filtered out posts with NSFW (over 18) content and chose an innocuous set of subreddits, some of the data may contain discriminatory or harmful language. The data does not reflect the views of the dataset creators. Please only engage with the data in accordance with your own personal risk tolerance.

Reddit users on these subreddits are also not necessarily representative of the broader population, which one should keep in mind before using any models trained on this data. As always, remember to evaluate!

## Contact

Please contact kawin@stanford.edu if you have any questions about the data. This project is being maintained by Kawin Ethayarajh, Heidi (Chenyu) Zhang, and Yizhong Wang.