---
task_categories:
- text-generation
- question-answering
tags:
- human feedback
- rlhf
- preferences
- reddit
- preference model
- RL
- NLG
- evaluation
size_categories:
- 1M<n<10M
language:
- en
---
# 🚢 Stanford Human Preferences Dataset v2 (SHP-2)
## Summary
SHP-2 is a dataset of **4.8M collective human preferences** over responses to questions/instructions in 129 different subject areas, from cooking to legal advice. It is an extended version of the original 385K [SHP dataset](https://huggingface.co/datasets/stanfordnlp/SHP).
The preferences are meant to reflect the helpfulness of one response over another, and are intended to be used for training RLHF reward models and NLG evaluation models (e.g., [SteamSHP](https://huggingface.co/stanfordnlp/SteamSHP-flan-t5-xl)).
Each example is a Reddit or StackExchange post with a question/instruction and a pair of top-level comments for that post, where one comment is more preferred by Reddit / StackExchange users (collectively).
SHP exploits the fact that if comment A was written *after* comment B but has a higher score nonetheless, then A is ostensibly preferred to B.
If A had been written before B, then we could not conclude this, since its higher score could have been the result of more visibility.
We chose data where the preference label is intended to reflect which response is more *helpful* rather than which is less *harmful*, the latter being the focus of much past work.
How is SHP different from [Anthropic's HH-RLHF dataset](https://huggingface.co/datasets/Anthropic/hh-rlhf) and [Open Assistant](https://huggingface.co/datasets/OpenAssistant/oasst1)?
| Dataset | Size | Input | Label | Domains | Data Format | Length |
| -------------------- | ---- | -------------------------- | ---------------------------- | ------------------------- | ------------------------------------- | --------------- |
| SHP-2 | 4.8M | Naturally occurring human-written responses | Collective Human Preference | 129 (labelled) | Question/Instruction + Response (Single-turn) | up to 10.1K T5 tokens |
| HH-RLHF | 91K | Dialogue with LLM | Individual Human Preference | not labelled | Live Chat (Multi-turn) | up to 1.5K T5 tokens |
| OASST | 161K | Dialogue with LLM | K Individual Preferences, Aggregated | not labelled | Live Chat (Multi-Turn) | up to 1.5K T5 tokens |
How is SHP different from other datasets that have scraped Reddit, like [ELI5](https://huggingface.co/datasets/eli5#source-data)?
SHP uses the timestamp information to infer preferences, while ELI5 only provides comments and scores -- the latter are not enough to infer preferences since comments made earlier tend to get higher scores from more visibility.
It also contains data from more domains:
| Dataset | Size | Comments + Scores | Preferences | Number of Domains |
| -------------------- | ---- | ------------------ | -------------| ------------------ |
| SHP-2 | 4.8M | Yes | Yes | 129 (70 from Reddit, 59 from StackExchange) |
| SHP | 385K | Yes | Yes | 18 (from Reddit) |
| ELI5 | 270K | Yes | No | 3 |
## Data Structure
There are 2 directories, one for Reddit and one for StackExchange. There are 70 subdirectories under `reddit/`, one for each subreddit, and 59 subdirectories under `stackexchange/`, one for each stackexchange site.
Each subdirectory contains a JSONL file for the training, validation, and test data.
Here's how to get the data using Huggingface's `datasets` library:
```python
from datasets import load_dataset
# Load all the data
dataset = load_dataset("stanfordnlp/shp-2")
# Load one of the subreddits
dataset = load_dataset("stanfordnlp/shp-2", data_dir="reddit/askculinary")
# Load one of the StackExchange sites
dataset = load_dataset("stanfordnlp/shp-2", data_dir="stackexchange/stack_academia")
```
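Each `load_dataset` call above returns a `DatasetDict` with `train`, `validation`, and `test` splits. As a minimal sketch, you can also request a single split directly (the `split` argument and field access below are standard `datasets` usage):
```python
from datasets import load_dataset

# Load just the training split of one subreddit
train = load_dataset("stanfordnlp/shp-2", data_dir="reddit/askculinary", split="train")

print(train[0]["history"])  # post title + body of the first training example
```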
Here's an example from `reddit/askculinary/train.json`:
```
{
`post_id`:"qt3nxl",
`domain`:"askculinary_train",
`upvote_ratio`:0.98,
`history`:"What's the best way to disassemble raspberries? Like this, but down to the individual seeds: https:\/\/i.imgur.com\/Z0c6ZKE.jpg I've been pulling them apart with tweezers and it's really time consuming. I have about 10 pounds to get through this weekend.",
`c_root_id_A`:"hkh25sc",
`c_root_id_B`:"hkh25lp",
`created_at_utc_A`:1636822112,
`created_at_utc_B`:1636822110,
`score_A`:340,
`score_B`:166,
`human_ref_A`:"Pectinex, perhaps? It's an enzyme that breaks down cellulose. With citrus, you let it sit in a dilute solution of pectinex overnight to break down the connective tissues. You end up with perfect citrus supremes. If you let the raspberries sit for a shorter time, I wonder if it would separate the seeds the same way...? Here's an example: https:\/\/www.chefsteps.com\/activities\/perfect-citrus-supreme",
`human_ref_B`:"Raspberry juice will make a bright stain at first, but in a matter of weeks it will start to fade away to almost nothing. It is what is known in the natural dye world as a fugitive dye, it will fade even without washing or exposure to light. I hope she gets lots of nice photos of these stains on her dress, because soon that will be all she has left of them!",
`labels`:1,
`metadata_A`: "",
`metadata_B`: "",
`seconds_difference`:2.0,
`score_ratio`:2.0481927711
}
```
Here's an example from `stackexchange/stack_academia/validation.json`:
```
{
`post_id`:"87393",
`domain`:"academia_validation",
`history`:"What to answer an author asking me if I reviewed his/her paper? <sep> Suppose I review someone's paper anonymously, the paper gets accepted, and a year or two later we meet e.g. in a social event and he/she asks me "did you review my paper?". What should I answer? There are several sub-questions here: Suppose the review was a good one, and the paper eventualy got accepted, so I do not mind telling that I was the reviewer. Is there any rule/norm prohibiting me from telling the truth? Suppose the review was not so good, so I do not want to reveal. What can I answer? If I just say "I am not allowed to tell you", this immediately reveals me... On the other hand, I do not want to lie. What options do I have?",
`c_root_id_A`:"87434",
`c_root_id_B`:"87453",
`created_at_utc_A`:1490989560,
`created_at_utc_B`:1491012608,
`score_A`:2,
`score_B`:5,
`human_ref_A`:"I am aware of at least one paper where a referee went out of cover (after the review process of course) and was explicitly mentioned in a later paper: <blockquote> X and Y thank Z, who as the anonymous referee was kind enough to point out the error (and later became non-anonymous). </blockquote> so it is sure fine to answer truthfully that yes you did review, but only if you wish of course (and most likely if you have been helpful and the authors of the paper responsive).",
`human_ref_B`:"Perhaps you should follow the example of Howard Percy Robertson (known as the 'R' in the famous FLRW, or Friedmann-Lematre-Robertson-Walker metric used in physical cosmology.) He was the referee of the famous Einstein-Rosen paper, which was rejected by Physical Review, prompting Einstein never to publish in Physical Review again. Einstein ignored the referee report, but months later, it seems, Robertson had a chance to talk to Einstein and may have helped convince him of the error of his ways. However, as far as we know, he never revealed to Einstein that he was the anonymous referee for Physical Review. It was not until 2005 I believe, long after the death of all participants, that Physical Review chose to disclose the referee's identity (http://physicstoday.scitation.org/doi/full/10.1063/1.2117822).",
`labels`:"0",
`metadata_A`:"Post URL: https://academia.stackexchange.com/questions/87393, Response URL: https://academia.stackexchange.com/questions/87434, Post author username: Erel Segal-Halevi, Post author profile: https://academia.stackexchange.com/users/787, Response author username: mts, Response author profile: https://academia.stackexchange.com/users/49583",
`metadata_B`:"Post URL: https://academia.stackexchange.com/questions/87393, Response URL: https://academia.stackexchange.com/questions/87453, Post author username: Erel Segal-Halevi, Post author profile: https://academia.stackexchange.com/users/787, Response author username: Viktor Toth, Response author profile: https://academia.stackexchange.com/users/7938",
`seconds_difference`:23048.0,
`score_ratio`:2.5,
}
```
where the fields are:
- ```post_id```: the ID of the Reddit/StackExchange post (string)
- ```domain```: the domain (subreddit or StackExchange site) and the split the example is drawn from, separated by an underscore (string)
- ```upvote_ratio```: the percent of votes received by the post that were positive (i.e., upvotes); -1.0 for StackExchange, where this data is not available (float)
- ```history```: the post title concatenated to the post body (string)
- ```c_root_id_A```: the ID of comment A (string)
- ```c_root_id_B```: the ID of comment B (string)
- ```created_at_utc_A```: utc timestamp of when comment A was created (integer)
- ```created_at_utc_B```: utc timestamp of when comment B was created (integer)
- ```score_A```: (# positive votes - # negative votes + 1) received by comment A; the +1 applies only to Reddit (see Data Selection) (integer)
- ```score_B```: (# positive votes - # negative votes + 1) received by comment B; the +1 applies only to Reddit (see Data Selection) (integer)
- ```human_ref_A```: text of comment A (string)
- ```human_ref_B```: text of comment B (string)
- ```labels```: the preference label -- it is 1 if A is preferred to B; 0 if B is preferred to A. This was randomized such that the label distribution is roughly 50/50. (integer)
- ```metadata_A```: metadata for the StackExchange post and comment A; empty for Reddit (string)
- ```metadata_B```: metadata for the StackExchange post and comment B; empty for Reddit (string)
- ```seconds_difference```: how many seconds after the less preferred comment the more preferred one was created (will always be >= 0) (float)
- ```score_ratio```: the ratio of the more preferred comment's score to the less preferred comment's score (will be >= 1) (float)
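Since ```labels``` indicates which response was preferred, each example can be mapped into a (prompt, preferred, dispreferred) triple for reward-model training. A minimal sketch, where the `chosen`/`rejected` column names are purely illustrative:
```python
from datasets import load_dataset

dataset = load_dataset("stanfordnlp/shp-2", data_dir="reddit/askculinary")

def to_preference_pair(example):
    # labels == 1 means human_ref_A is the preferred response
    chosen = example["human_ref_A"] if example["labels"] == 1 else example["human_ref_B"]
    rejected = example["human_ref_B"] if example["labels"] == 1 else example["human_ref_A"]
    return {"prompt": example["history"], "chosen": chosen, "rejected": rejected}

pairs = dataset["train"].map(to_preference_pair)
```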
## Dataset Design
### Domain Selection
The data is sourced from Reddit and StackExchange, which are both public forums organized into different domains.
SHP-2 contains a train, validation, and test split for comments scraped from each domain. We chose domains based on:
1. whether they were well-known (>= 100K subscribers for Reddit and >= 50K for StackExchange)
2. whether posts were expected to pose a question or instruction
3. whether responses were valued based on how *helpful* they were
4. whether comments had to be rooted in some objectivity, instead of being entirely about personal experiences (e.g., `askscience` vs. `AskAmericans`)
The train/validation/test splits were created by splitting the post IDs of a domain in 90%/5%/5% proportions respectively, so that no post would appear in multiple splits.
Since different posts have different numbers of comments, the number of preferences in each split is not exactly 90%/5%/5%.
See below for a list of all domains:
Reddit: \
techsupport, asklinguistics, askscience, catadvice, campingandhiking, askphysics, espresso, botany, asksocialscience, askbaking, ultralight, legaladvice, hiking, webdev, askengineers, screenwriting, askhistorians, vegetarian, writing, diy, musictheory, camping, moviesuggestions, askeconomics, stocks, frugal, outoftheloop, booksuggestions, gamedev, linuxquestions, asknetsec, aviation, askacademia, asksciencefiction, askhr, explainlikeimfive, etymology, entrepreneur, cooking, puppy101, keto, crochet, smallbusiness, architecture, artfundamentals, sewing, zerowaste, changemyview, mechanicadvice, iwanttolearn, eatcheapandhealthy, askanthropology, askculinary, askphilosophy, tea, running, excel, homebrewing, solotravel, fishing, cookingforbeginners, homeautomation, ifyoulikeblank, travel, suggestmeabook, televisionsuggestions, sysadmin, askcarguys, askdocs, askvet
StackExchange: \
stack_unix, stack_android, stack_academia, stack_superuser, stack_tex, stack_photo, stack_datascience, stack_mechanics, stack_english, stack_askubuntu, stack_sharepoint, stack_workplace, stack_blender, stack_ethereum, stack_stats, stack_bitcoin, stack_gamedev, stack_raspberrypi, stack_arduino, stack_magento, stack_physics, stack_mathoverflow, stack_dsp, stack_movies, stack_crypto, stack_apple, stack_mathematica, stack_philosophy, stack_wordpress, stack_ux, stack_webmasters, stack_cs, stack_travel, stack_bicycles, stack_softwarerecs, stack_money, stack_ell, stack_scifi, stack_aviation, stack_math, stack_biology, stack_drupal, stack_diy, stack_security, stack_salesforce, stack_graphicdesign, stack_stackoverflow, stack_webapps, stack_cooking, stack_networkengineering, stack_dba, stack_puzzling, stack_serverfault, stack_codereview, stack_music, stack_codegolf, stack_electronics, stack_chemistry, stack_gis
### Data Selection
For Reddit, the score of a post/comment is 1 plus the number of upvotes (approvals) it gets from users, minus the number of downvotes (disapprovals) it gets.
For StackExchange, the score of a post/comment is simply the number of upvotes (approvals) it gets from users, minus the number of downvotes (disapprovals) it gets.
The value of a score is relative; domains (and posts) with more traffic will have more high-scoring posts (and comments).
Within a post, comments posted earlier will tend to have a higher score simply due to having more exposure, which is why using timestamp information is essential when inferring preferences.
Given a post P and two comments (A,B) we only included the preference A > B in the dataset if
1. A was written *no earlier than* B and A has a higher score than B.
2. The post is a self-post (i.e., a body of text and not a link to another page) made before 2023, was not edited, and is not NSFW (over 18). For StackExchange, edited posts were permitted as long as they were edited prior to the writing of the comments.
3. Neither comment was made by a deleted user, a moderator, or the post creator. The post was not made by a deleted user or moderator.
4. For Reddit, the post has a score >= 10 and each comment has a score >= 2 (upvoted at least once). For StackExchange, the post has a score >= 5 and each comment has a non-zero score.
The conditions are laxer for StackExchange because it is more strictly moderated than Reddit, allowing us to hit the same data quality with lower thresholds.
In particular, we allow negative-score comments from StackExchange because the negative scores are likely due to the comment being inaccurate/misinformed rather than toxic, and this provides a useful signal.
A post with `n` comments could have up to (`n` choose `2`) preferences in the data.
Since the number of comments per post is Pareto-distributed, to prevent a relatively small number of posts from dominating the Reddit data, we limited the scraping to 50 comments per post.
This means that each post could have up to (`50` choose `2`) preferences in the dataset, though this is a much smaller number in practice, since all the criteria above need to be met.
No such limit was imposed for StackExchange, since there are fewer comments per post.
### Reddit Preprocessing
We tried to keep preprocessing to a minimum. Subreddit-specific abbreviations were expanded (e.g., "CMV" to "Change my view that").
In hyperlinks, only the referring text was kept and the URL was removed (if the URL was written out, then it was kept).
### Finetuning
If you want to finetune a model to predict human preferences (e.g., for NLG evaluation or an RLHF reward model), here are some helpful tips:
1. **Preprocess the data.** The total input length should fit under the model's token limit (usually 512 tokens).
Although models like FLAN-T5 use relative positional embeddings and can in principle handle longer inputs, we found that the loss would not converge if we finetuned on inputs over 512 tokens.
To avoid this, truncate the post text (in the `history` field) as much as possible, such that the whole input is under 512 tokens (do not truncate the comment(s), however).
If the input is still over 512 tokens after truncating the post, simply skip the example (see the preprocessing sketch after this list).
2. **Use a sufficiently large model.**
Finetuning a single FLAN-T5-xl model across [the original 385K SHP training data](https://huggingface.co/datasets/stanfordnlp/SHP) should give you a test accuracy between 72-73% (across all domains on examples where the entire input fits within the token limit), ranging from 65-80% on individual subreddits.
3. **Do in-domain prediction.** Out-of-domain performance will be poor if the domains are unrelated (e.g., if you fine-tune on `askculinary` preferences and test on `askcarguys` preferences).
4. **Train for fewer epochs.** The InstructGPT paper suggests training a reward model for only 1 epoch.
Since the same comment appears in multiple preferences, it is easy to overfit to the data.
5. **Training on less data may help.**
Preferences with a large `score_ratio` (e.g., comment A having 2x the score of comment B) will provide a stronger signal for finetuning the model, so you may only want to consider preferences above a certain `score_ratio`.
The number of preferences per post is Pareto-distributed, so to prevent the model from over-fitting to certain posts, you may want to limit the number of preferences from a particular post.
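The sketch below illustrates tips 1 and 5 together: filtering by `score_ratio` and truncating only the post so the whole input fits in 512 tokens. The prompt template, the 2.0 threshold, and the FLAN-T5 tokenizer are assumptions for illustration, not a prescribed recipe:
```python
from datasets import load_dataset
from transformers import AutoTokenizer

MAX_TOKENS = 512
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xl")

train = load_dataset("stanfordnlp/shp-2", data_dir="reddit/askculinary", split="train")

# Tip 5: keep only preferences with a clear margin (the 2.0 threshold is illustrative)
train = train.filter(lambda x: x["score_ratio"] >= 2.0)

# Illustrative input template; use whatever format your preference model expects
TEMPLATE = "POST: {post}\n\nRESPONSE A: {a}\n\nRESPONSE B: {b}\n\nWhich response is better?"

def build_input(example):
    """Tip 1: truncate the post (never the responses) so the input fits in MAX_TOKENS."""
    fixed = TEMPLATE.format(post="", a=example["human_ref_A"], b=example["human_ref_B"])
    budget = MAX_TOKENS - len(tokenizer(fixed)["input_ids"])  # approximate budget left for the post
    if budget <= 0:
        return {"input_text": ""}  # the responses alone exceed the limit: skip this example
    post_ids = tokenizer(example["history"], truncation=True, max_length=budget)["input_ids"]
    post = tokenizer.decode(post_ids, skip_special_tokens=True)
    return {"input_text": TEMPLATE.format(post=post, a=example["human_ref_A"], b=example["human_ref_B"])}

train = train.map(build_input).filter(lambda x: x["input_text"] != "")
```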
## Biases and Limitations
### Biases
Although we filtered out posts with NSFW (over 18) content and chose domains that were well-moderated and had policies against harassment and bigotry, some of the data may still contain discriminatory or harmful language.
The data does not reflect the views of the dataset creators.
Reddit and StackExchange users are also not representative of the broader population.
Although subreddit-specific demographic information is not available, Reddit users overall are disproportionately male and from developed, Western, and English-speaking countries ([Pew Research](https://www.pewresearch.org/internet/2013/07/03/6-of-online-adults-are-reddit-users/)).
This is likely also true of StackExchange users.
Please keep this in mind before using any models trained on this data.
### Limitations
The preference label in SHP is intended to reflect how *helpful* one response is relative to another, given an instruction/question.
SHP is not intended for use in harm-minimization, as it was not designed to include the toxic content that would be necessary to learn a good toxicity detector.
If you are looking for data where the preference label denotes less harm, we would recommend the harmfulness split of [Anthropic's HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf).
Another limitation is that the more preferred response in SHP is not necessarily the more factual one.
Though some comments do provide citations to justify their response, most do not.
There are exceptions, such as the `askhistorians` subreddit, which is heavily moderated and where answers are expected to provide citations.
Note that the collective preference label in SHP is not necessarily what we would get if we asked users to independently vote on each comment before taking an unweighted sum.
This is because comment scores on Reddit are public and are known to influence user preferences; a high score increases the likelihood of getting more positive votes [(Muchnik et al., 2013)](https://pubmed.ncbi.nlm.nih.gov/23929980/).
Whether this "herding effect" temporarily or permanently shifts a user's preference is unclear.
Therefore, while SHP does reflect collective human preferences, models trained on SHP may not generalize to settings where individual preferences are aggregated differently (e.g., users vote independently without ever seeing the current comment score, users vote after conferring, etc.).
Thanks to Greg Stoddard for pointing this out.
## License
Last updated: 07/16/2023
### Reddit
The data was made by scraping publicly available data in accordance with a historical version of the [Reddit API Terms of Use](https://docs.google.com/a/reddit.com/forms/d/e/1FAIpQLSezNdDNK1-P8mspSbmtC2r86Ee9ZRbC66u929cG2GX0T9UMyw/viewform), without any direct communication or written agreements with Reddit.
According to the Terms of Use, "User Content" is owned by the users themselves -- not by Reddit -- and Reddit grants a "non-exclusive, non-transferable, non-sublicensable, and revocable license to copy and display the User Content".
At the time of writing, the terms also state that "no other rights or licenses are granted or implied, including any right to use User Content for other purposes, such as for training a machine learning or artificial intelligence model, without the express permission of rightsholders in the applicable User Content."
However, the legality of training on publicly available data will depend on your jurisdiction (legal in Japan, for example).
Datasets made by scraping Reddit are widely used in the research community: for example, Facebook AI Research used data scraped from Reddit to make the [ELI5](https://huggingface.co/datasets/eli5#source-data) dataset in 2019, which was made available without a license.
Anthropic has also [attested to scraping Reddit](https://arxiv.org/pdf/2112.00861.pdf) for preferences using a different methodology, though this data was not made public.
We take no responsibility for and we do not expressly or implicitly endorse any downstream use of this dataset.
We reserve the right to modify the SHP dataset and this license at any point in the future.
### StackExchange
StackExchange data is made available under a [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/).
## Contact
Please contact kawin@stanford.edu if you have any questions about the data.
This dataset was created by Kawin Ethayarajh, Heidi (Chenyu) Zhang, and Shabnam Behzad with advice from Dan Jurafsky and Yizhong Wang.
Kawin and Heidi prepared the Reddit datasets and trained the SteamSHP models.
Kawin and Shabnam prepared the StackExchange data.
Dan and Yizhong provided advice on dataset construction.
## Citation
We will have a paper out soon, but until then, please cite:
```
@InProceedings{pmlr-v162-ethayarajh22a,
title = {Understanding Dataset Difficulty with $\mathcal{V}$-Usable Information},
author = {Ethayarajh, Kawin and Choi, Yejin and Swayamdipta, Swabha},
booktitle = {Proceedings of the 39th International Conference on Machine Learning},
pages = {5988--6008},
year = {2022},
editor = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
volume = {162},
series = {Proceedings of Machine Learning Research},
month = {17--23 Jul},
publisher = {PMLR},
}
```