Hastagaras committed on
Commit dbf2544
Parent(s): 1eddef3
Update README.md
README.md CHANGED
@@ -23,3 +23,14 @@ configs:
   - split: train
     path: data/train-*
 ---
+I forgot if this dataset is the dirty version of Reddit Writing Prompts or not; it's probably a mix of both.
+
+The data was filtered and classified using [Lilac](https://www.lilacml.com/) with two embedding models:
+- [jinaai/jina-embeddings-v2-base-en](https://huggingface.co/jinaai/jina-embeddings-v2-base-en)
+- [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3)
+
+(Note: Lilac is amazing, BTW, and the UI is nice. Highly recommended for data-processing tasks.)
+
+The dataset has been converted to ShareGPT format, including word counts for responses and labeled perspectives. While the labeling may not be 100% accurate, ambiguous cases have been labeled separately, with their perspectives excluded from the prompts.
+
+Non-story content has been removed, though some examples may have been missed. Some non-story content was purposefully kept when it was closely related to the prompt (such as relevant responses); it is sometimes hard to draw a clear line. Stories containing unwanted words or sentences were filtered based on personal preference. Since "slop" is subjective and lacks a standardized definition, you may need to perform additional cleaning before using this dataset for training.