SHP exploits the fact that if comment A was written *after* comment B but has a higher score anyway, then A is ostensibly more preferred to B.
If A had been written before B, then we could not conclude this, since its higher score could have been the result of more visibility.
We chose data where the preference label is intended to reflect which response is more *helpful* rather than which is less *harmful*, the latter being the focus of much past work.
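A minimal sketch of this inference rule (the function name and timestamps are illustrative, not part of any SHP tooling): a preference is inferred only when the higher-scoring comment is also the later one, so the score gap cannot be explained by extra visibility.

```python
from typing import Optional

def infer_preference(score_a: int, time_a: int, score_b: int, time_b: int) -> Optional[str]:
    """Return "A" if A is ostensibly preferred, "B" for the reverse,
    and None when extra visibility could explain the score gap."""
    # Only a comment that is later (or simultaneous) yet higher-scoring
    # counts as preferred; an earlier, higher-scoring comment is inconclusive.
    if time_a >= time_b and score_a > score_b:
        return "A"
    if time_b >= time_a and score_b > score_a:
        return "B"
    return None

# The later comment outscores the earlier one -> preference inferred.
print(infer_preference(score_a=12, time_a=250, score_b=5, time_b=100))  # A
# The earlier comment outscores the later one -> inconclusive.
print(infer_preference(score_a=12, time_a=100, score_b=5, time_b=250))  # None
```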

How is SHP different from [Anthropic's HH-RLHF dataset](https://huggingface.co/datasets/Anthropic/hh-rlhf) and [Open Assistant](https://huggingface.co/datasets/OpenAssistant/oasst1)?
Most notably, all the data in SHP is naturally occurring and human-written, whereas the responses in HH-RLHF and OASST come from dialogue with an LLM, giving us very different distributions that can complement each other.

| Dataset | Size | Input | Label | Domains | Data Format | Length |
| -------------------- | ---- | -------------------------- | ---------------------------- | ------------------------- | ------------------------------------- | --------------- |
| SHP-2 | 4.8M | Naturally occurring human-written responses | Collective Human Preference | 129 (labelled) | Question/Instruction + Response (Single-turn) | up to 10.1K T5 tokens |
| HH-RLHF | 91K | Dialogue with LLM | Individual Human Preference | not labelled | Live Chat (Multi-turn) | up to 1.5K T5 tokens |
| OASST | 161K | Dialogue with LLM | K Individual Preferences, Aggregated | not labelled | Live Chat (Multi-Turn) | up to 1.5K T5 tokens |

How is SHP different from other datasets that have scraped Reddit, like [ELI5](https://huggingface.co/datasets/eli5#source-data)?
SHP uses the timestamp information to infer preferences, while ELI5 only provides comments and scores -- the latter are not enough to infer preferences since comments made earlier tend to get higher scores from more visibility.

Here's an example from `reddit/askculinary/train.json`:
```
{
    ...
}
```
where the fields are:
- ```post_id```: the ID of the Reddit post (string)
- ```domain```: the subreddit and split the example is drawn from, separated by an underscore (string)
- ```upvote_ratio```: the percent of votes received by the post that were positive (aka upvotes) (float)
- ```history```: the post title concatenated to the post body (string)
- ```c_root_id_A```: the ID of comment A (string)
- ```c_root_id_B```: the ID of comment B (string)
- ```created_at_utc_A```: UTC timestamp of when comment A was created (integer)
- ```created_at_utc_B```: UTC timestamp of when comment B was created (integer)
- ```score_A```: (# positive votes - # negative votes + 1) received by comment A (integer)
- ```score_B```: (# positive votes - # negative votes + 1) received by comment B (integer)
- ```human_ref_A```: text of comment A (string)
- ```human_ref_B```: text of comment B (string)
- ```labels```: the preference label -- it is 1 if A is preferred to B; 0 if B is preferred to A. This was randomized such that the label distribution is roughly 50/50. (integer)
- ```seconds_difference```: how many seconds after the less preferred comment the more preferred one was created (will always be >= 0) (integer)
- ```score_ratio```: the ratio of the more preferred comment's score to the less preferred comment's score (will be >= 1) (float)

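The derived fields can be illustrated with a hypothetical record shaped like the schema above (every value below is made up): `seconds_difference` and `score_ratio` always compare the more preferred comment to the less preferred one, whichever of A/B the randomization put it at.

```python
# A hypothetical record shaped like the fields above; every value is made up.
example = {
    "post_id": "abc123",
    "domain": "askculinary_train",
    "upvote_ratio": 0.98,
    "history": "Post title. Post body.",
    "c_root_id_A": "c1", "c_root_id_B": "c2",
    "created_at_utc_A": 1650000300, "created_at_utc_B": 1650000000,
    "score_A": 120, "score_B": 20,
    "human_ref_A": "Comment A text", "human_ref_B": "Comment B text",
    "labels": 1,
}

# labels == 1 means A is preferred; the derived fields always describe the
# more preferred comment relative to the less preferred one.
pref, dispref = ("A", "B") if example["labels"] == 1 else ("B", "A")
seconds_difference = example[f"created_at_utc_{pref}"] - example[f"created_at_utc_{dispref}"]
score_ratio = example[f"score_{pref}"] / example[f"score_{dispref}"]
print(seconds_difference, score_ratio)  # 300 6.0
```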
Here's an example from `stackexchange/stack_academia/validation.json`:
```
{
    ...
}
```
## Dataset Design
### Domain Selection
The data is sourced from Reddit and StackExchange, which are both public forums organized into different domains.

SHP-2 contains a train, validation, and test split for comments scraped from each domain. We chose domains based on:
1. whether they were well-known (>= 100K subscribers for Reddit and >= 50K for StackExchange)
2. whether posts were expected to pose a question or instruction
3. whether responses were valued based on how *helpful* they were
4. whether comments had to be rooted in some objectivity, instead of being entirely about personal experiences (e.g., `askscience` vs. `AskAmericans`)

The train/validation/test splits were created by splitting the post IDs of a domain in 90%/5%/5% proportions respectively, so that no post would appear in multiple splits.
Since different posts have different numbers of comments, the number of preferences in each split is not exactly 90%/5%/5%.

See below for a list of all domains:

Reddit: \
techsupport, asklinguistics, askscience, catadvice, campingandhiking, askphysics, espresso, botany, asksocialscience, askbaking, ultralight, legaladvice, hiking, webdev, askengineers, screenwriting, askhistorians, vegetarian, writing, diy, musictheory, camping, moviesuggestions, askeconomics, stocks, frugal, outoftheloop, booksuggestions, gamedev, linuxquestions, asknetsec, aviation, askacademia, asksciencefiction, askhr, explainlikeimfive, etymology, entrepreneur, cooking, puppy101, keto, crochet, smallbusiness, architecture, artfundamentals, sewing, zerowaste, changemyview, mechanicadvice, iwanttolearn, eatcheapandhealthy, askanthropology, askculinary, askphilosophy, tea, running, excel, homebrewing, solotravel, fishing, cookingforbeginners, homeautomation, ifyoulikeblank, travel, suggestmeabook, televisionsuggestions, sysadmin, askcarguys, askdocs, askvet

Stackexchange: \
stack_unix, stack_android, stack_academia, stack_superuser, stack_tex, stack_pho…
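The 90%/5%/5% split by post ID described above can be sketched as follows. SHP-2's actual assignment procedure is not specified here; a deterministic hash bucket is just one assumed way to keep every preference pair from a post inside a single split.

```python
import hashlib

def assign_split(post_id: str) -> str:
    """Deterministically bucket a post ID into train/validation/test (90/5/5)."""
    # Splitting on the post ID (rather than on individual comment pairs)
    # guarantees no post appears in multiple splits.
    bucket = int(hashlib.sha256(post_id.encode()).hexdigest(), 16) % 100
    if bucket < 90:
        return "train"
    if bucket < 95:
        return "validation"
    return "test"

# The assignment is stable across runs, so no post straddles two splits.
print(assign_split("abc123") == assign_split("abc123"))  # True
```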
### Data Selection
TODO: check if this section holds for stack

For Reddit, the score of a post/comment is 1 plus the number of upvotes (approvals) it gets from users, minus the number of downvotes (disapprovals) it gets.
For Stackexchange, the score of a post/comment is simply the number of upvotes it gets from users, minus the number of downvotes it gets.
The value of a score is relative; in domains (and posts) with more traffic, there will be more high-scoring posts (and comments).
Within a post, comments posted earlier will tend to have a higher score simply due to having more exposure, which is why using timestamp information is essential when inferring preferences.

Given a post P and two comments (A, B), we only included the preference A > B in the dataset if:
1. A was written *no earlier than* B and A has a higher score than B.
2. The post is a self-post (i.e., a body of text and not a link to another page) made before 2023, was not edited, and is not NSFW (over 18).
3. Neither comment was made by a deleted user, a moderator, or the post creator. The post was not made by a deleted user or moderator.
4. The post has a score >= 10 and each comment has a score >= 2 (upvoted at least once).
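The comment-level criteria above can be sketched as a filter. This is a hedged sketch, not the actual SHP-2 pipeline: the function name and author strings are illustrative, criterion 2 (post metadata) is assumed checked upstream, and the more preferred comment A is taken to be the later one, consistent with `seconds_difference` being non-negative.

```python
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    score: int
    created_at_utc: int

def keep_preference(post_score: int, post_author: str, a: Comment, b: Comment) -> bool:
    """True when the preference A > B passes the comment-level criteria;
    criterion 2 (post metadata) is assumed to be checked upstream."""
    # "[deleted]" / "AutoModerator" are illustrative stand-ins for deleted
    # users and moderators; the post creator is also excluded (criterion 3).
    excluded = {"[deleted]", "AutoModerator", post_author}
    return (
        a.created_at_utc >= b.created_at_utc and a.score > b.score  # later yet higher-scoring
        and a.author not in excluded and b.author not in excluded   # criterion 3
        and post_score >= 10 and a.score >= 2 and b.score >= 2      # criterion 4
    )

a = Comment("user1", score=12, created_at_utc=200)
b = Comment("user2", score=3, created_at_utc=100)
print(keep_preference(post_score=25, post_author="op", a=a, b=b))  # True
```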