Seed control
My question is: would it be possible to see the seed used when generating an image? Or even to define it before generating?
I ask because, although I save the interesting images themselves, sometimes what I want is to take a previous image and modify it, and I can only reproduce it if I have the seed in addition to the prompt.
(I hope I have explained myself correctly.)
Greetings and congratulations on the excellent work.
Thanks! There's probably a way, because these are just diffusers pipelines, and they have seeds implemented, see:
https://huggingface.co/docs/diffusers/using-diffusers/reusing_seeds
The idea would be to let the user specify a seed, and fall back to a random one if none is given. However, none of the Spaces that rely on the Inference Endpoints have supported seeds, so the code to enable them, or to display the one used to generate an image, remains unknown. The Spaces that do allow seed control use a CPU-based pipeline, which slows everything down to about 15 minutes per image, and the GPU-powered ones are paid for by their creators.
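For reference, here's a minimal sketch of what that would look like with a locally loaded diffusers pipeline, following the docs linked above (the model id, prompt, and `user_seed` variable are just placeholders, and this does not go through the Inference API):

```python
import random

import torch
from diffusers import StableDiffusionPipeline

# Load a pipeline locally (model id is only an example).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Use the user's seed if given, otherwise draw a random one and report it,
# so the image can always be reproduced later.
user_seed = None  # e.g. 42 to reproduce a previous image
seed = user_seed if user_seed is not None else random.randint(0, 2**32 - 1)
print(f"Seed used: {seed}")

# A torch.Generator seeded this way makes the diffusion process deterministic.
generator = torch.Generator(device="cuda").manual_seed(seed)
image = pipe("an astronaut riding a horse", generator=generator).images[0]
image.save("output.png")
```

Running it again with the printed seed plugged into `user_seed` should give the same image for the same prompt and settings.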
It remains on the wishlist, along with negative prompts (which should also be possible), for someone who knows how to code! I just copied and pasted Omnibus' code for the Maximum Multiplier and posed for the photo! XD
As soon as I see a single example of working seed-control code for the Inference API, I'll implement it in this Space ASAP.
Thanks for your response!!!