AdamCodd committed d458fb1 (parent 14b37ab): Create README.md
Files changed (1): README.md (added, +94 lines)
---
license: cc-by-nc-4.0
datasets:
- AdamCodd/Civitai-8m-prompts
metrics:
- rouge
base_model: t5-small
model-index:
- name: t5-small-negative-prompt-generator
  results:
  - task:
      type: text-generation
      name: Text Generation
    metrics:
    - type: loss
      value: 0.173
    - type: rouge-1
      value: 63.86
      name: Validation ROUGE-1
    - type: rouge-2
      value: 47.5195
      name: Validation ROUGE-2
    - type: rouge-l
      value: 62.0977
      name: Validation ROUGE-L
widget:
- text: masterpiece, 1girl, looking at viewer, sitting, tea, table, garden
  example_title: Prompt
pipeline_tag: text2text-generation
---
## T5-small-negative-prompt-generator

This model was trained on a subset (~800K prompts) of the [AdamCodd/Civitai-8m-prompts](https://huggingface.co/datasets/AdamCodd/Civitai-8m-prompts) dataset, restricted to the top 10% of prompts by Civitai's positive engagement (the "stats" field in the dataset). The training data includes negative embeddings, so the model will output them as well.
It achieves the following results on the evaluation set:
* Loss: 0.1730
* Rouge1: 63.8600
* Rouge2: 47.5195
* Rougel: 62.0977
* Rougelsum: 62.1006

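The evaluation code is not part of this card; the sketch below only illustrates how ROUGE scores like these are typically computed with the `evaluate` library (listed under framework versions). The `preds`/`refs` lists are hypothetical placeholders for generated and reference negative prompts from the evaluation split.

```python
import evaluate

rouge = evaluate.load("rouge")

# Hypothetical examples: model outputs vs. reference negative prompts.
preds = ["easynegative, badhandv4, (worst quality:2), blurry"]
refs = ["easynegative, badhandv4, (worst quality:2), lowres, blurry"]

scores = rouge.compute(predictions=preds, references=refs)
# evaluate returns fractions in [0, 1]; the card reports them scaled by 100.
print({name: round(value * 100, 4) for name, value in scores.items()})
```
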
The idea is to automatically generate a negative prompt that improves the final image, given the positive prompt as input. The model is still experimental, but it already performs quite well; it could be useful as a recommendation for users new to Stable Diffusion and similar tools.

The license is **cc-by-nc-4.0**. For commercial use rights, please [contact me](https://discord.com/users/859202914400075798).
## Usage

The length of the negative prompt is adjustable with the `max_new_tokens` parameter. Keep in mind that you will need to tune the sampling parameters slightly to avoid repetition and improve the quality of the output.
```python
from transformers import pipeline

text2text_generator = pipeline(
    "text2text-generation",
    model="AdamCodd/t5-small-negative-prompt-generator",
)

# Generate a negative prompt from a positive prompt; max_new_tokens caps its length.
generated_text = text2text_generator(
    "masterpiece, 1girl, looking at viewer, sitting, tea, table, garden",
    do_sample=True,
    max_new_tokens=50,
    repetition_penalty=1.2,
    no_repeat_ngram_size=2,
    temperature=0.9,
    top_p=0.92,
)
print(generated_text)
# [{'generated_text': 'easynegative, badhandv4, (worst quality:2), (low quality lowres:1), blurry, text'}]
```
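Because `do_sample=True`, the output varies from call to call; call `transformers.set_seed` beforehand if you need reproducible results.
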
This model has been trained exclusively on Stable Diffusion prompts (SD1.4, SD1.5, SD2.1, SDXL, ...), so it may not work as well with other text-to-image models.

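Since the output is an ordinary Stable Diffusion negative prompt, it can be passed straight to the `negative_prompt` argument of a diffusers pipeline. Below is a minimal sketch, not part of this card; the SD1.5 checkpoint `runwayml/stable-diffusion-v1-5` is an arbitrary choice for illustration.

```python
import torch
from diffusers import StableDiffusionPipeline
from transformers import pipeline

prompt = "masterpiece, 1girl, looking at viewer, sitting, tea, table, garden"

# Generate a negative prompt from the positive prompt.
generator = pipeline("text2text-generation", model="AdamCodd/t5-small-negative-prompt-generator")
negative_prompt = generator(
    prompt,
    do_sample=True,
    max_new_tokens=50,
    repetition_penalty=1.2,
    no_repeat_ngram_size=2,
    temperature=0.9,
    top_p=0.92,
)[0]["generated_text"]

# Use both prompts with a Stable Diffusion pipeline (checkpoint chosen for illustration).
sd = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
image = sd(prompt, negative_prompt=negative_prompt).images[0]
image.save("garden.png")
```

Note that tokens such as `easynegative` or `badhandv4` refer to textual-inversion embeddings; they only take effect if the corresponding embedding files are loaded into the pipeline (e.g. with `sd.load_textual_inversion(...)`), otherwise they are treated as plain text.
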
## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of how they might map onto `Seq2SeqTrainingArguments` follows the list):
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-08
- mixed precision
- num_epochs: 1
- weight_decay: 0.01

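The training script is not published, so the following is only a minimal sketch of how the listed settings could be expressed with `Seq2SeqTrainingArguments`; the choice of fp16 as the mixed-precision mode and the `output_dir` name are assumptions.

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical mapping of the hyperparameters listed above; AdamW with
# betas=(0.9, 0.999) and epsilon=1e-08 is already the Trainer default.
args = Seq2SeqTrainingArguments(
    output_dir="t5-small-negative-prompt-generator",  # assumed name
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=42,
    num_train_epochs=1,
    weight_decay=0.01,
    fp16=True,  # "mixed precision"; fp16 rather than bf16 is an assumption
)
```
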
### Framework versions

- Transformers 4.36.2
- Datasets 2.16.1
- Tokenizers 0.15.0
- Evaluate 0.4.1

If you want to support me, you can [here](https://ko-fi.com/adamcodd).