---
dataset_info:
  features:
  - name: response_words
    dtype: int64
  - name: label
    dtype: string
  - name: conversations
    list:
    - name: from
      dtype: string
    - name: value
      dtype: string
  splits:
  - name: train
    num_bytes: 61858860.826112196
    num_examples: 13302
  download_size: 39125513
  dataset_size: 61858860.826112196
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

I forget whether this dataset is the dirty version of Reddit Writing Prompts or not; it's probably a mix of both.

The data was filtered and classified using [Lilac](https://www.lilacml.com/) with two embedding models (a rough sketch of the idea appears below):

- [jinaai/jina-embeddings-v2-base-en](https://huggingface.co/jinaai/jina-embeddings-v2-base-en)
- [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3)

(Note: Lilac is amazing BTW, and the UI is nice. Highly recommended for data processing tasks.)
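
For a concrete picture, here is a minimal sketch of the kind of embedding-based similarity filtering this implies, done directly with `sentence-transformers` rather than through Lilac's UI. The seed queries and the threshold are made-up assumptions for illustration, not the criteria actually used:

```python
from sentence_transformers import SentenceTransformer, util

# Hypothetical sketch (NOT the exact Lilac pipeline used for this dataset):
# score each text against seed queries with BAAI/bge-m3 and keep the ones
# that look story-like. Queries and threshold are assumptions.
model = SentenceTransformer("BAAI/bge-m3")

queries = ["a fictional short story", "a narrative scene with characters"]
texts = [
    "The ship drifted past the last lighthouse, and Mara counted the waves.",
    "Upvote if you agree! Also check out my other subreddit.",
]

query_emb = model.encode(queries, normalize_embeddings=True)
text_emb = model.encode(texts, normalize_embeddings=True)

# Cosine similarity of each text to its best-matching seed query.
scores = util.cos_sim(text_emb, query_emb).max(dim=1).values
kept = [t for t, s in zip(texts, scores) if s > 0.4]  # 0.4 is a guess
print(kept)
```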

The dataset has been converted to ShareGPT format, with a word count for each response and a label for its narrative perspective. The labeling may not be 100% accurate; ambiguous cases have been labeled separately, with their perspectives excluded from the prompts.
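
Given the schema above, each example carries a ShareGPT-style `conversations` list plus the `response_words` count and the perspective `label`. A quick way to peek at a row with the `datasets` library (the repo id below is a placeholder; substitute this dataset's actual path):

```python
from datasets import load_dataset

# "org/dataset-name" is a placeholder repo id, not this dataset's real path.
ds = load_dataset("org/dataset-name", split="train")

row = ds[0]
print(row["label"], row["response_words"])  # perspective label, response word count
for turn in row["conversations"]:           # ShareGPT turns: {"from": ..., "value": ...}
    print(f'{turn["from"]}: {turn["value"][:80]}')
```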

Non-story content has been removed, though some examples may have slipped through. Some non-story content was deliberately kept when it was closely related to the prompt (such as relevant replies); it's hard to draw a clear line sometimes. Stories containing unwanted words or sentences were filtered out based on my personal preferences. Since "slop" is subjective and lacks a standardized definition, you may need to perform additional cleaning before using this dataset for training.
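
If you do want extra cleaning, a simple starting point is a phrase blocklist applied with `datasets.Dataset.filter`. The phrases below are placeholders for whatever you consider slop, not the words that were actually filtered here:

```python
from datasets import load_dataset

# Placeholder blocklist: substitute your own definition of "slop".
BLOCKLIST = ["shivers down my spine", "a testament to"]

def is_clean(example):
    # Drop the example if any turn contains a blocklisted phrase.
    return not any(
        phrase in turn["value"].lower()
        for turn in example["conversations"]
        for phrase in BLOCKLIST
    )

ds = load_dataset("org/dataset-name", split="train")  # placeholder repo id
ds = ds.filter(is_clean)
print(len(ds))
```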

PS: MY MAIN ACCOUNT IS A MESS AND THE STORAGE IS FULL, SO I CREATED THIS 'ORGANIZATION' TO DUMP MY MODELS AND DATASETS