---
license: bigscience-bloom-rail-1.0
tags:
- stable-diffusion
- diffusion
model-index:
- name: bloom-560m-RLHF-SD2-prompter
  results: []
  
datasets:
 - Gustavosta/Stable-Diffusion-Prompts

widget:
- text: "<s>Prompt: "

inference:
  parameters:
    eos_token_id: 2
    max_length: 128
    do_sample: true
---

# BLOOM-560m RLHF SD2 Prompter 

This model uses RLHF (Reinforcement Learning from Human Feedback) to further finetune [mrm8488/bloom-560m-finetuned-sd-prompts](https://hf.co/mrm8488/bloom-560m-finetuned-sd-prompts) for Stable Diffusion 2.0.
```
batch_size = 16
learning_rate = 0.001 # this is why I didn't have to spend _forever_ on it
```
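
The actual training code isn't included here, so purely as an illustration, here is a minimal REINFORCE-style sketch of the kind of reward-weighted update the human rankings feed into. The reward mapping and names like `human_reward` are assumptions for the sketch, not the real pipeline.

```python
# Illustrative sketch only: a single reward-weighted (REINFORCE-style) update,
# where a human ranking supplies the scalar reward. This is NOT the exact
# training code used for this model; `human_reward` is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "mrm8488/bloom-560m-finetuned-sd-prompts"  # the model being finetuned
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # lr from the card

# Sample a prompt continuation from the current policy
query = tokenizer("<s>Prompt: cool landscape,", return_tensors="pt")
generated = model.generate(**query, do_sample=True, max_length=42)

# Human feedback, e.g. +1 if the image this prompt produced was preferred,
# -1 if it was ranked worse than the alternative
human_reward = 1.0

# Mean negative log-likelihood of the sampled sequence under the policy;
# scaling it by the reward pushes probability toward well-ranked prompts
nll = model(generated, labels=generated).loss
loss = human_reward * nll

optimizer.zero_grad()
loss.backward()
optimizer.step()
```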

To generate an extension, prepend "\<s>Prompt: " to whatever your normal prompt is.

I did this myself. I sat down and just ranked images for so long. It's gone through a couple of iterations. Only the biases and layernorm weights were trained. The commit messages are a MESS. **First iteration of this project**
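
Since only the biases and layernorm weights were trained, the trainable-parameter selection presumably looked something like the BitFit-style filter below. This is a reconstruction from that note, not the original code; the name matching assumes BLOOM's parameter naming.

```python
# Sketch: freeze everything except bias terms and layernorm parameters,
# matching the note above that only biases and layernorm weights were trained.
# Name matching assumes BLOOM's naming ("*_layernorm", "ln_f", "*.bias").
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("mrm8488/bloom-560m-finetuned-sd-prompts")

for name, param in model.named_parameters():
    param.requires_grad = (
        "bias" in name or "layernorm" in name or name.startswith("transformer.ln_f")
    )

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable params: {trainable:,} / {total:,}")
```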

Donate so I can do this on real hardware: https://github.com/aicrumb/aicrumb/blob/main/README.md

## Example usage

```python
# Install libraries needed to run the models
!pip install transformers diffusers accelerate -qq

# Import the libraries
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler
from transformers import pipeline
import torch

# This is the model that the transformer was finetuned to generate prompts for
model_id = "stabilityai/stable-diffusion-2-base"

# Use the Euler scheduler here
scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, revision="fp16", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Load the transformer model
prompt_pipe = pipeline("text-generation", model="crumb/bloom-560m-RLHF-SD2-prompter")
prompt = "cool landscape"

# Auto-complete prompt
prompt = "<s>Prompt: " + prompt + ","
extended_prompt = prompt_pipe(prompt, do_sample=True, max_length=42)[0]['generated_text']
extended_prompt = extended_prompt[10:]  # strip the leading "<s>Prompt:" prefix
print("Prompt is now: ", extended_prompt)

# Generate image
image = pipe(extended_prompt).images[0]  

image.save("output.png")
image  # displays the image in a notebook
```
*Prompt is now:   cool landscape, concept art*
![](https://cdn.discordapp.com/attachments/1010693530181718146/1047831482808406067/image.png)

*Prompt is now:   cool landscape, concept art, sharp focus, digital painting*
![](https://cdn.discordapp.com/attachments/1010693530181718146/1047832480335536249/image.png)

Short additions, but they work, I guess (results vary).

It's also very good at generating prompts entirely on its own, given just "\<s>Prompt: " as the input.

*\<s>Prompt: 1 0 th century, highly detailed, concept art, cinematic lighting, unreal engine, trending on artstation, artstation hd, artstation hq, very very detailed*
![](https://cdn.discordapp.com/attachments/1010693530181718146/1047843202050310174/image.png)

Further testing remains to be done in this area (automated training with aesthetic-prediction models, larger-scale collection of prompt scores, and better training in general).
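
As a rough sketch of the "automated training with aesthetic-prediction models" idea, the manual ranking step could be replaced by scoring generated images with an aesthetic predictor and using the score difference as the reward. `aesthetic_score` below is a placeholder, not a specific released model, and `pipe` / `prompt_pipe` are the pipelines from the example above.

```python
# Sketch: swap the manual ranking for an automated aesthetic reward.
# `aesthetic_score` is a placeholder for any image-aesthetics predictor;
# `pipe` and `prompt_pipe` are the pipelines loaded in the example above.
def aesthetic_score(image):
    # Placeholder: return a scalar aesthetic estimate for a PIL image,
    # e.g. from a CLIP-embedding-based aesthetic predictor.
    raise NotImplementedError

def automated_reward(base_prompt):
    query = "<s>Prompt: " + base_prompt + ","
    extended = prompt_pipe(query, do_sample=True, max_length=42)[0]["generated_text"][10:]
    baseline = pipe(base_prompt).images[0]   # image from the plain prompt
    candidate = pipe(extended).images[0]     # image from the extended prompt
    # Reward the extension by how much it improves the predicted aesthetics
    return aesthetic_score(candidate) - aesthetic_score(baseline)
```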

Also, enjoy this graphic I had to make myself because I kept being indecisive about the reward methodology: ![](https://cdn.discordapp.com/attachments/1010693530181718146/1047846272096292925/image.png)