---
license: cc-by-nc-4.0
datasets:
  - AdamCodd/Civitai-8m-prompts
metrics:
  - rouge
base_model: t5-small
model-index:
  - name: t5-small-negative-prompt-generator
    results:
      - task:
          type: text-generation
          name: Text Generation
        metrics:
          - type: loss
            value: 0.173
          - type: rouge-1
            value: 63.86
            name: Validation ROUGE-1
          - type: rouge-2
            value: 47.5195
            name: Validation ROUGE-2
          - type: rouge-l
            value: 62.0977
            name: Validation ROUGE-L
widget:
  - text: masterpiece, 1girl, looking at viewer, sitting, tea, table, garden
    example_title: Prompt
pipeline_tag: text2text-generation
inference: false
---

# t5-small-negative-prompt-generator

This model is a fine-tuned version of t5-small on a subset (~800K prompts) of the AdamCodd/Civitai-8m-prompts dataset, restricted to the top 10% of prompts ranked by Civitai's positive engagement (the "stats" field in the dataset). The training data includes negative embeddings (such as easynegative and badhandv4), so the model will output them.
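
For illustration, here is a minimal sketch of how such a subset could be selected with the `datasets` library. The column names (`stats`) and the way a single engagement score is derived from it are assumptions; this is not the exact preprocessing used for this model.

```python
from datasets import load_dataset
import numpy as np

# Load the full prompt dataset (split name assumed)
ds = load_dataset("AdamCodd/Civitai-8m-prompts", split="train")

# Hypothetical: collapse the "stats" field into a single engagement score;
# the real schema of "stats" may differ
def add_score(example):
    stats = example["stats"]
    if isinstance(stats, dict):
        example["score"] = sum(v for v in stats.values() if isinstance(v, (int, float)))
    else:
        example["score"] = 0
    return example

ds = ds.map(add_score)

# Keep the top 10% of prompts by engagement
threshold = np.percentile(ds["score"], 90)
top_prompts = ds.filter(lambda ex: ex["score"] >= threshold)
```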

It achieves the following results on the evaluation set (see the evaluation sketch after this list):

- Loss: 0.1730
- Rouge1: 63.8600
- Rouge2: 47.5195
- Rougel: 62.0977
- Rougelsum: 62.1006
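
As an illustration of the metric, here is a hedged sketch of how the ROUGE scores could be computed with the `evaluate` library; the example predictions and references are made up, and the actual evaluation pipeline isn't published.

```python
import evaluate

rouge = evaluate.load("rouge")

# Hypothetical generated vs. reference negative prompts
preds = ["easynegative, badhandv4, (worst quality:2), blurry"]
refs = ["easynegative, badhandv4, (worst quality:2), lowres, blurry, text"]

results = rouge.compute(predictions=preds, references=refs)
# `evaluate` returns scores in [0, 1]; the figures above are scaled by 100
print({k: round(v * 100, 4) for k, v in results.items()})
```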

The idea is to automatically generate negative prompts that improve the result for a given positive prompt. This could be useful for suggesting negative prompts to new users of Stable Diffusion and similar models.

The license is cc-by-nc-4.0. For commercial use rights, please contact me.

## Usage

The length of the negative prompt can be adjusted with the `max_new_tokens` parameter. Keep in mind that you'll need to tweak the sampling parameters slightly to avoid repetition and improve the quality of the output.

```python
from transformers import pipeline

# Load the fine-tuned model as a text2text-generation pipeline
text2text_generator = pipeline("text2text-generation", model="AdamCodd/t5-small-negative-prompt-generator")

# Sampling parameters tuned to limit repetition in the generated negative prompt
generated_text = text2text_generator(
    "masterpiece, 1girl, looking at viewer, sitting, tea, table, garden",
    do_sample=True,
    max_new_tokens=50,
    repetition_penalty=1.2,
    no_repeat_ngram_size=2,
    temperature=0.9,
    top_p=0.92
)

print(generated_text)
# [{'generated_text': 'easynegative, badhandv4, (worst quality:2), (low quality lowres:1), blurry, text'}]
```
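
If you want more control than the pipeline offers, the model can also be loaded directly. This is standard `transformers` usage with the same sampling settings, not an official snippet from the author:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("AdamCodd/t5-small-negative-prompt-generator")
model = AutoModelForSeq2SeqLM.from_pretrained("AdamCodd/t5-small-negative-prompt-generator")

inputs = tokenizer(
    "masterpiece, 1girl, looking at viewer, sitting, tea, table, garden",
    return_tensors="pt",
)
outputs = model.generate(
    **inputs,
    do_sample=True,
    max_new_tokens=50,
    repetition_penalty=1.2,
    no_repeat_ngram_size=2,
    temperature=0.9,
    top_p=0.92,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```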

This model has been trained exclusively on Stable Diffusion prompts (SD1.4, SD1.5, SD2.1, SDXL...), so it might not work as well for prompts targeting other text-to-image models.

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-08
- Mixed precision
- num_epochs: 1
- weight_decay: 0.01
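
For reference, these map roughly onto the Hugging Face `Seq2SeqTrainingArguments` below. The actual training script isn't published, so treat this as a sketch; in particular, `fp16=True` is an assumption for "Mixed precision".

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-small-negative-prompt-generator",
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=42,
    num_train_epochs=1,
    weight_decay=0.01,
    # AdamW defaults, listed explicitly to match the card
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,  # assumption: "Mixed precision" taken to mean fp16
)
```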

### Framework versions

- Transformers 4.36.2
- Datasets 2.16.1
- Tokenizers 0.15.0
- Evaluate 0.4.1

If you want to support me, you can here.