---
license: cc-by-nc-4.0
datasets:
- AdamCodd/Civitai-8m-prompts
metrics:
- rouge
base_model: t5-small
model-index:
- name: t5-small-negative-prompt-generator
  results:
  - task:
      type: text-generation
      name: Text Generation
    metrics:
    - type: loss
      value: 0.14079
    - type: rouge-1
      value: 68.7527
      name: Validation ROUGE-1
    - type: rouge-2
      value: 53.8612
      name: Validation ROUGE-2
    - type: rouge-l
      value: 67.3497
      name: Validation ROUGE-L
widget:
- text: masterpiece, 1girl, looking at viewer, sitting, tea, table, garden
  example_title: Prompt
pipeline_tag: text2text-generation
inference: false
tags:
- art
---
## t5-small-negative-prompt-generator
This model is [t5-small](https://huggingface.co/google-t5/t5-small) finetuned on a subset of the [AdamCodd/Civitai-8m-prompts](https://huggingface.co/datasets/AdamCodd/Civitai-8m-prompts) dataset (~800K prompts), restricted to the top 10% of prompts by Civitai's positive engagement (the "stats" field in the dataset).

It achieves the following results on the evaluation set:
* Loss: 0.14079
* ROUGE-1: 68.7527
* ROUGE-2: 53.8612
* ROUGE-L: 67.3497
* ROUGE-Lsum: 67.3552
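
The exact evaluation script isn't published; as a minimal sketch, ROUGE scores of this kind can be computed with the Evaluate library listed under Framework versions below. The prompts here are placeholders, not real evaluation data:

```python
import evaluate

rouge = evaluate.load("rouge")

# Placeholder data: in practice these would be the model's decoded outputs
# and the reference negative prompts from the held-out split.
predictions = ["(worst quality, low quality:1.4), EasyNegative"]
references = ["(worst quality, low quality:1.4), EasyNegative, lowres"]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # values in [0, 1]; the scores above appear to be scaled by 100
```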

The idea is to automatically generate negative prompts that improve the end result for a given positive prompt. I believe this could be useful for suggesting negative prompts to new users of Stable Diffusion and similar tools.

The license is **cc-by-nc-4.0**. For commercial use rights, please contact me (adamcoddml@gmail.com).

## Usage

The length of the negative prompt is adjustable with the `max_new_tokens` parameter. Both `repetition_penalty` and `no_repeat_ngram_size` are needed, as the model starts to repeat itself very quickly without them. You can use `temperature` and `top_k` to make the outputs more varied; see the sampling example after the basic usage below.

```python
from transformers import pipeline

# Load the model and tokenizer from the Hugging Face Hub
text2text_generator = pipeline("text2text-generation", model="AdamCodd/t5-small-negative-prompt-generator")

generated_text = text2text_generator(
    "masterpiece, 1girl, looking at viewer, sitting, tea, table, garden",
    max_new_tokens=50,        # upper bound on the negative prompt's length
    repetition_penalty=1.2,   # penalize already-generated tokens
    no_repeat_ngram_size=2    # forbid repeating any 2-gram
)
print(generated_text)
# [{'generated_text': '(worst quality, low quality:1.4), EasyNegative'}]
```
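
For more varied outputs, note that `temperature` and `top_k` only take effect when sampling is enabled via `do_sample=True`; the following is a minimal sketch, and the specific values are illustrative, not tuned:

```python
generated_text = text2text_generator(
    "masterpiece, 1girl, looking at viewer, sitting, tea, table, garden",
    max_new_tokens=50,
    repetition_penalty=1.2,
    no_repeat_ngram_size=2,
    do_sample=True,   # enable sampling so temperature/top_k are applied
    temperature=0.9,  # <1.0 sharpens the distribution, >1.0 flattens it
    top_k=50          # sample from the 50 most likely tokens at each step
)
print(generated_text)
```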
This model has been trained exclusively on Stable Diffusion prompts (SD1.4, SD1.5, SD2.1, SDXL...), so it might not work as well for non-Stable-Diffusion models.

NB: The dataset includes negative embeddings (such as `EasyNegative`), so they appear in the outputs, as in the example above.

## Training and evaluation data

The model was finetuned on a ~800K-prompt subset of [AdamCodd/Civitai-8m-prompts](https://huggingface.co/datasets/AdamCodd/Civitai-8m-prompts), selected as the top 10% of prompts by positive engagement; see the introduction above.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-08
- Mixed precision
- num_epochs: 2
- weight_decay: 0.01
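
As a hypothetical sketch (the actual training script isn't published, and `output_dir` is a placeholder), these settings map onto `Seq2SeqTrainingArguments` roughly as follows:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-small-negative-prompt-generator",  # placeholder path
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,  # mixed precision
    num_train_epochs=2,
    weight_decay=0.01
)
```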

### Framework versions

- Transformers 4.36.2
- Datasets 2.16.1
- Tokenizers 0.15.0
- Evaluate 0.4.1

If you want to support me, you can do so [here](https://ko-fi.com/adamcodd).