---
dataset_info:
  features:
    - name: response_words
      dtype: int64
    - name: label
      dtype: string
    - name: conversations
      list:
        - name: from
          dtype: string
        - name: value
          dtype: string
  splits:
    - name: train
      num_bytes: 61858860.826112196
      num_examples: 13302
  download_size: 39125513
  dataset_size: 61858860.826112196
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

I forgot if this dataset is the dirty version of Reddit Writing Prompts or not, it's probably a mix of both.

The data was filtered and classified using Lilac with two embedding models.

(Note: Lilac is amazing BTW, and the UI is nice. Highly recommended for data-processing tasks.)

The dataset has been converted to ShareGPT format, with a word count for each response and a perspective label. The labeling may not be 100% accurate; ambiguous cases are labeled separately, and their perspectives are excluded from the prompts.
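To make the schema concrete, here is a minimal sketch of what one record looks like in ShareGPT format. The field names come from the metadata above; the example content and the `"first-person"` label value are assumptions for illustration only.

```python
# Hypothetical example record matching the dataset schema above.
# The text and label value are made up; only the field names/types
# come from the dataset metadata.
record = {
    "response_words": 5,            # word count of the model's reply
    "label": "first-person",        # perspective label (assumed value)
    "conversations": [
        {"from": "human", "value": "Write a story about a lighthouse."},
        {"from": "gpt", "value": "The lighthouse keeper waited alone."},
    ],
}

# response_words should match the word count of the "gpt" turn
reply = next(t["value"] for t in record["conversations"] if t["from"] == "gpt")
assert record["response_words"] == len(reply.split())
```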

Non-story content has been removed, though some examples may have slipped through. Some non-story content was purposefully kept when it was closely related to the prompt (like relevant responses); it's hard to draw a clear line sometimes. Stories containing unwanted words or sentences were filtered based on personal preference. Since "slop" is subjective and has no standardized definition, you may need to do additional cleaning before using this dataset for training.
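If you do want to apply your own extra cleaning pass, a minimal sketch could look like the following. The phrase list is a placeholder, not the one used for this dataset; swap in whatever you consider slop.

```python
# Sketch of an extra "slop" filter over ShareGPT-style records.
# UNWANTED is a placeholder list, since what counts as slop is subjective.
UNWANTED = ["a testament to", "shivers down"]

def is_clean(record):
    """Return True if no turn in the record contains an unwanted phrase."""
    text = " ".join(turn["value"] for turn in record["conversations"]).lower()
    return not any(phrase in text for phrase in UNWANTED)

# Tiny made-up examples to show the filter in action.
examples = [
    {"conversations": [{"from": "gpt", "value": "It was a testament to hope."}]},
    {"conversations": [{"from": "gpt", "value": "The rain fell quietly."}]},
]
cleaned = [ex for ex in examples if is_clean(ex)]
# cleaned keeps only the second example
```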

PS: MY MAIN ACCOUNT IS A MESS AND THE STORAGE IS FULL, SO I CREATED THIS 'ORGANIZATION' TO DUMP MY REPOS AND DATASETS.