# Load the transformer model
prompt_pipe = pipeline("text-generation", model="crumb/bloom-560m-RLHF-SD2-prompter")
prompt = "cool landscape"

# Auto-complete the prompt
prompt = "<s>Prompt: " + prompt + ","
extended_prompt = prompt_pipe(prompt, do_sample=True, max_length=42)[0]['generated_text']
extended_prompt = extended_prompt[10:]  # strip the 10-character "<s>Prompt:" marker
print("Prompt is now: ", extended_prompt)

# Generate an image from the extended prompt
image = pipe(extended_prompt).images[0]
image.save("output.png")
image
```

*Prompt is now: cool landscape, concept art*

![](https://cdn.discordapp.com/attachments/1010693530181718146/1047831482808406067/image.png)
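
The `extended_prompt[10:]` slice above works because `"<s>Prompt:"` is exactly 10 characters. A small sketch makes that magic number explicit (the `wrap_prompt`/`unwrap_prompt` helper names are ours for illustration, not part of the model card):

```python
# Sketch of the prompt wrapping/unwrapping used in the snippet above.
# Helper names are hypothetical, not a model-card API.
PREFIX = "<s>Prompt:"  # 10 characters

def wrap_prompt(prompt: str) -> str:
    # Builds the same string as the snippet: "<s>Prompt: " + prompt + ","
    return PREFIX + " " + prompt + ","

def unwrap_prompt(generated: str) -> str:
    # Equivalent to generated[10:]: removes the "<s>Prompt:" marker
    # and leaves the leading space behind.
    return generated.removeprefix(PREFIX)

wrapped = wrap_prompt("cool landscape")
print(wrapped)                 # <s>Prompt: cool landscape,
print(unwrap_prompt(wrapped))  #  cool landscape,
```

Note that `str.removeprefix` needs Python 3.9+; on older versions the plain `[10:]` slice does the same job.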

A short addition, but it works, I guess.

Further testing is to be done in this area: automated training with aesthetic-predicting models, larger-scale data collection on prompt scores, and better training in general.
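
Because `do_sample=True` gives a different completion on every call, one pattern worth trying (our suggestion, not something the model card prescribes) is to request several candidates via the text-generation pipeline's `num_return_sequences` parameter and keep the one you prefer, e.g. the longest:

```python
# Hypothetical selection helper; in practice `candidates` would come from
# prompt_pipe(prompt, do_sample=True, max_length=42, num_return_sequences=4),
# which returns a list of {"generated_text": ...} dicts.
def pick_extension(candidates):
    texts = [c["generated_text"][10:] for c in candidates]  # strip "<s>Prompt:"
    return max(texts, key=len)  # crude heuristic: prefer the most detailed prompt

# Stubbed-in sample outputs for illustration:
sampled = [
    {"generated_text": "<s>Prompt: cool landscape, concept art"},
    {"generated_text": "<s>Prompt: cool landscape, matte painting, trending on artstation"},
]
print(pick_extension(sampled))
```

The length heuristic is only a stand-in; scoring candidates with an aesthetic predictor, as mentioned above, would be the more principled version of the same loop.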