Question

#275
by Jt70047 - opened

New to the forum. Can someone tell me how to get the same character across different sessions?

The standard suggestion is to use a prompt with a known person, someone the generator will have plenty of references to draw from, such as a popular movie actor or a historical figure.

So, consider who you want to play the role of your character and have them star in your prompt... er... comic book.

So no repeating a generated original character?

If your original character is famous, or the model itself has been trained on that character's data, then the ones and zeros it predicts will follow that pattern.
Give it a year or so and the answer will likely be "most definitely yes", but for now I'd say it's a fairly firm "No, but..."

Perhaps you can share some data related to an original character and we can go from there.
And that's the crux of the matter... a shared character will not likely stay unique to you (for long) unless you adjust the application to use, and be trained on, personal data available only to you (that specific original character's data).
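As a rough sketch of where that's heading: with Hugging Face's diffusers library you can already load custom character weights (a LoRA) trained on your own images into a pipeline. The repo id and trigger word below are hypothetical placeholders, not a real project:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Hypothetical repo id: assumes you have already fine-tuned a LoRA
# on a couple dozen images of your original character.
pipe.load_lora_weights("your-username/my-character-lora")

# "mycharacter" stands in for whatever trigger token you trained with.
image = pipe("mycharacter, a red-haired pirate captain, comic book style").images[0]
image.save("consistent_character.png")
```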

This is, for instance, how we get consistent styles from the current dropdown menu... the generation is constrained to specific styles.
That's why when we use "Japanese" we get a specific result that we won't get with "3D Render" or any of the other styles.
Each style carries specific data that provides the parameters for pattern prediction, which produces the results we see.
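Under the hood, a style dropdown like that is often just a set of prompt constraints. Here's a minimal sketch of the idea; the exact fragments are illustrative assumptions on my part, not AI Comic Factory's actual presets:

```python
# Minimal sketch: a style dropdown implemented as prompt templates.
# These fragments are illustrative, not the app's real presets.
STYLE_PRESETS = {
    "Japanese": "manga style, black and white ink, screentone shading",
    "3D Render": "3d render, octane, soft studio lighting",
    "American": "american comic book style, bold lines, flat colors",
}

def build_prompt(style: str, scene: str) -> str:
    """Prepend the style's constraint fragment to the scene description."""
    return f"{STYLE_PRESETS[style]}, {scene}"

print(build_prompt("Japanese", "a detective walking through rainy Tokyo"))
# -> "manga style, black and white ink, screentone shading, a detective ..."
```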

We would have to know more about the original character under consideration to suggest much more.

Certainly a goal of the process is to reach that level of consistency, and to get there we'll likely need a means to supply that localized/custom/original data, as can be seen in applications/spaces like: https://huggingface.co/spaces/hardon-server/space-diffusion-img2img-1

What these models do is constrain the output to give weight to specific patterns in the data, such as the specific style of an original character.
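As a minimal sketch of what a space like that does, here is an img2img call with diffusers; the model id, file paths, and parameter values are my assumptions, not that space's exact setup:

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Start from an existing image of your character (path is a placeholder).
reference = Image.open("my_character.png").convert("RGB").resize((768, 512))

result = pipe(
    prompt="the same character, new pose, comic book style",
    image=reference,
    strength=0.5,        # lower = stays closer to the reference image
    guidance_scale=7.5,  # how strongly the prompt constrains the output
).images[0]
result.save("variation.png")
```

The `strength` knob is the key trade-off here: low values preserve your character's pattern, high values give the model freedom to redraw.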

In my estimation, two kinds of models that would be even more important than supplying an image are those that allow inpainting and sketching.
With those we would gain the ability to mark specific areas of a panel that should be refined or replaced (redrawn).
Those areas could then be pointed to a reference image, so the process would know that the area should be filled with a variation on "our original input/character".
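A minimal inpainting sketch with diffusers (again, the file names and model id are placeholders I'm assuming, not a specific space's setup):

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

# The panel to fix, plus a mask marking the region to redraw (white = redraw).
panel = Image.open("panel.png").convert("RGB").resize((512, 512))
mask = Image.open("mask.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="our original character's face, matching the reference design",
    image=panel,
    mask_image=mask,
).images[0]
result.save("panel_redrawn.png")
```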

Example of Sketch to Image: https://huggingface.co/spaces/TencentARC/T2I-Adapter-SDXL-Sketch
(There is one out there that allows erasing as well as drawing; it's really nice and represents where things are heading.)

But this will not happen overnight or even soon.
What you could do, of course, is take the output from AI Comic Factory, bring it into one of those other models (such as one that allows inpainting/replacement), and feed it your original data constraints.

Then, of course, come back here and tell us how you did it! :)

  • Rodney

(Thank you btw @o0Rodney0o for taking the time to answer community questions!)

Thanks so much, I appreciate your help!
