Update all_models.py

#3
by digiplay - opened
No description provided.

πŸ₯ΉπŸ₯²πŸ₯²πŸ₯²I'm sorry, please ignore this again, I'm sleepy now 😭πŸ₯΄πŸ˜΅β€πŸ’«πŸ˜‚πŸ˜‚

I just wanted to paste some model names for testing πŸ˜­πŸ˜…πŸ˜‚

Haha, no problem digiplay, gotta love your accidental edits!! πŸ€£πŸ€£πŸ˜‰πŸ€

Yntec changed pull request status to closed

By the way, diffusers 0.28.0.dev0 has been released πŸ’•
We can update to this new version.

The sd-to-diffusers converter space was recently updated to version 0.27,
but it is very slow and can't display pictures normally,
so I edited it to use version 0.28,
and it's fast! πŸ’•πŸ’–πŸ€—

I guess we can get rid of all the unknown problems now, haha

(There are many 22dev-version models I still haven't checked.
I'm very grateful that you said you can help me,
but there are so many models that I'm too shy to bother you πŸ₯ΉπŸ˜³πŸ«£

I just updated 2-3 of the 22dev models (I often see unknown errors and don't know why),
and it took me a long time; now my eyes feel sore... πŸ˜΅β€πŸ’«πŸ₯΄

Recently I've been using ChatGPT to teach me programming,
and I wanted to copy some models to test my app... but I am too sleepy.

So really, really sorry again... please forgive me... πŸ₯ΉπŸ₯²πŸ˜­

Screenshot_20240602_045913_Vivaldi Snapshot.jpg

image (12) (17).webp

Thanks digiplay! If you use one of your models through a space like this one, instead of "unknown error" you get to see the actual error. When the GPU where the model is hosted runs out of memory, it looks like this:

"Could not complete request to HuggingFace API, Status Code: 500, Error: unknown error, Warnings: ['CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 14.75 GiB total capacity; 1.86 GiB already allocated; 11.06 MiB free; 1.90 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF', 'There was an inference error: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 14.75 GiB total capacity; 1.86 GiB already allocated; 11.06 MiB free; 1.90 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF']"

That's it. From what I can see, these errors have nothing to do with the diffusers version; it's just a matter of luck, since some GPUs never seem to run out of memory, so the models run fine! When it works, it works; I've even seen images produced in 8.5 seconds! And those were 768x768 ones, so I bet the 512x512 ones are even faster.

Those errors fix themselves after a while whether you edit the model or not, though for me they seem to affect more recent models: the older the model, the better the chances it works properly. The recent models I've uploaded have been timing out for almost an hour on the day I release them; days later the problems don't seem as severe.

But it's not just you, it's happening to all the models at huggingface! The worst are the Internal Server Errors, because they happen AFTER your image has been generated; it just wasn't shown. If you request it again it appears immediately, but people who don't know that and move on probably just lose the image, which bugs me.

That's why I prefer using models from a space like this instead of the model page; it's very cool to see the actual errors and know what to do. If it's out of memory, there's nothing you can do but wait; the models will work again even if you don't touch their diffusers version.

Haha, no problem digiplay, gotta love your accidental edits!! πŸ€£πŸ€£πŸ˜‰πŸ€

Ohhhhhhh I'm just seeing this now πŸ˜­πŸ˜­πŸ˜­πŸ˜­πŸ˜­πŸ˜­πŸ’–πŸ’–πŸ’–πŸ’–πŸ’–

Thanks for your kindness πŸ’–πŸ˜Άβ€πŸŒ«οΈπŸ˜Άβ€πŸŒ«οΈ

Professional Professor @Yntec, do you know how to stop image generation in code if it takes over 90 seconds, and output the results anyway?

I asked ChatGPT 3.5 this question, but it always gives me wrong answers.
(It did give me good advice though: I can use my mother language πŸ‘‰ auto-translate my prompt to English and then generate images. I am working on it now, hope I can share it with you soon :) :)
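
A rough sketch of that translate-then-generate idea, assuming a Helsinki-NLP translation model; the zh-en pair here is only an example, so swap in the pair for your own language. Token and model id are placeholders.

```python
# Hypothetical sketch: auto-translate a native-language prompt to English,
# then send it to the Inference API. The zh-en model pair is an assumption.
import requests
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-zh-en")

def generate(prompt_native: str, model_id: str = "Yntec/m0nst3rfy3") -> bytes:
    # Translate the prompt to English first.
    prompt_en = translator(prompt_native)[0]["translation_text"]
    r = requests.post(
        f"https://api-inference.huggingface.co/models/{model_id}",
        headers={"Authorization": "Bearer hf_xxx"},  # placeholder token
        json={"inputs": prompt_en},
    )
    r.raise_for_status()
    return r.content  # raw image bytes
```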

PS. You are right, I can see many 404, 500, 502, or 503 errors. I am studying them,
hope I will figure it out πŸ˜‚πŸ˜‚

do you know how to stop image generation in code if it takes over 90 seconds, and output the results anyway?

Once an image is requested there's no way to stop it; all my "stop" button does is stop listening for the result so you can send a different request. But if you send the same request again that you stopped, and it's cached, it'll appear instantly. Some images may take up to 400 seconds to generate; I'm afraid people would just leave and never see them, they'd rather switch models and get 30 other images in that time. It's like images themselves have lost value.
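
In the spirit of that answer, here is a minimal sketch of the closest thing to "stop after 90 seconds": a client-side cutoff, which only stops waiting; the server keeps generating, and an identical later request may then hit the cache. Endpoint and token are placeholders, as above.

```python
# Sketch of a client-side 90-second cutoff. This cannot stop the server-side
# generation; like the "stop" button above, it only stops listening.
import requests

def request_image(prompt: str, model_id: str, timeout_s: float = 90.0):
    try:
        r = requests.post(
            f"https://api-inference.huggingface.co/models/{model_id}",
            headers={"Authorization": "Bearer hf_xxx"},  # placeholder token
            json={"inputs": prompt},
            timeout=timeout_s,  # stop waiting after 90 s
        )
        r.raise_for_status()
        return r.content  # raw image bytes
    except requests.exceptions.Timeout:
        # Generation may still finish server-side and be cached for a retry.
        return None
```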

my mother language

It would be hilarious if it was Spanish, as Spanish is my mother language.

PS. You are right, I can see many 404, 500, 502, or 503 errors. I am studying them,

This is huggingface's fault; it happens regularly on the AUTOMATIC1111 UI that I use to mix models, and on spaces of any kind, not just image generation ones. But hey, we get what we pay for! XD I just keep merging models to give something back; huggingface provides the best free service on the net, by far.

My solution: create multiple versions of models; if one is out of memory, use another one, as in the sketch below. We already did this for the AbsoluteReality 1.8.1 and MGM models, unintentionally, ha!
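
A sketch of that fallback idea; the model ids and token are illustrative placeholders, so substitute your own copies.

```python
# Sketch of the "multiple versions" fallback: try each copy of a model in
# turn and keep the first successful image. Ids and token are illustrative.
import requests

HEADERS = {"Authorization": "Bearer hf_xxx"}  # placeholder token

def generate_with_fallback(prompt: str, model_ids: list) -> bytes:
    last_error = None
    for model_id in model_ids:
        r = requests.post(
            f"https://api-inference.huggingface.co/models/{model_id}",
            headers=HEADERS,
            json={"inputs": prompt},
        )
        if r.status_code == 200:
            return r.content  # first copy that isn't out of memory wins
        last_error = f"{model_id}: {r.status_code}"
    raise RuntimeError(f"All copies failed, last error: {last_error}")

# e.g. generate_with_fallback("pretty lady",
#          ["user-one/AbsoluteReality", "user-two/AbsoluteReality"])
```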

The worst are the Internal Server Errors, because they happen AFTER your image has been generated; it just wasn't shown. If you request it again it appears immediately, but people who don't know that and move on probably just lose the image, which bugs me.

I was wrong. The WORST is when THAT happens but the server doesn't know it happened, so instead of throwing an error it outputs a black square identical to the false positive ones. The only way to tell them apart is that false positives get cached: if you send the prompt again, a cached black square comes up immediately, while here it'll show the image. Except that, since it has run out of memory, the new output will also be another black square, so the difference is the time it takes to show it! The only way around this seems to be converting the whole model to diffusers again so it gets a different GPU. It'll keep throwing black boxes until it gets free memory again, but at least then it knows, so it also throws errors, and one can send the same prompt after a black box and get an image.

Phew! Weirdly enough this only happens to some of the models I've uploaded, like m0nst3rfy3, DucHaiten-FANCYxFANCY and Dreamworks; they may have been unlucky.

I'm just rambling now, but "get a black screen, send the same prompt again" is a novel solution, especially since your image was generated but not shown.
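
A rough sketch of that heuristic: detect a near-black result with PIL, resend the same prompt, and use the retry time to tell a cached false positive from a freshly regenerated image. The brightness threshold and the 2-second cutoff are guesses; token is a placeholder.

```python
# Sketch of "got a black square? send the same prompt again". A black retry
# that returns instantly was a cached false positive; a slow retry means the
# GPU actually regenerated. Threshold/cutoff values are rough guesses.
import io
import time
import requests
from PIL import Image

HEADERS = {"Authorization": "Bearer hf_xxx"}  # placeholder token

def is_black_square(image_bytes: bytes, threshold: int = 10) -> bool:
    img = Image.open(io.BytesIO(image_bytes)).convert("L")  # grayscale
    _, brightest = img.getextrema()
    return brightest <= threshold  # every pixel is (almost) black

def fetch(prompt: str, model_id: str) -> bytes:
    r = requests.post(f"https://api-inference.huggingface.co/models/{model_id}",
                      headers=HEADERS, json={"inputs": prompt})
    r.raise_for_status()
    return r.content

data = fetch("a cute rabbit", "Yntec/m0nst3rfy3")
if is_black_square(data):
    start = time.monotonic()
    data = fetch("a cute rabbit", "Yntec/m0nst3rfy3")  # same prompt again
    if time.monotonic() - start < 2 and is_black_square(data):
        print("instant black square again: cached false positive")
    else:
        print("regenerated: image shown, or the GPU is still out of memory")
```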

Yntec/m0nst3rfy3 is a very cute model ❀️❀️❀️❀️❀️
I think there's a bug in this model, look at this:

Screenshot_20240615_065302_Vivaldi Snapshot.jpg

Screenshot_20240615_065331_Vivaldi Snapshot.jpg

if you use "Cute" rabbit, it shows black square ofently,
if you use "Cut" rabbit, it shows very cute rabbitπŸ˜‚πŸ˜‚ strange bug? I just found πŸ˜†πŸ˜†

you can try other prompt ? not using "Cute" word.. because this model is cute already ? πŸ’―β˜ΊοΈ

πŸ’–πŸ’–πŸ’–πŸ’–πŸ’–πŸ’–πŸ’–πŸ’–πŸ’–πŸ’–πŸ’–πŸ’–

many "cut" rabbits for you... πŸ˜‰

19fcb0e5-64d8-47c9-9234-3aac333f3a4b.jpeg
500ab265-e190-4d0e-8419-58e60f92b933.jpeg
1947d69d-199e-4f4a-bcc8-135992a7b204.jpeg

PS. Some of the models I uploaded show too many black squares... I've given up on fixing them temporarily haha... I guess the DucHaiten models have other bugs, maybe many bugs?

I once uploaded:
digiplay/DucHaiten-Real3D-NSFW-V1

but the generated images don't look real, far from his DEMO images πŸ₯΄

I think some people use it well in AUTOMATIC1111,
but with huggingface's API... I get many bad results... πŸ˜…

Thanks digiplay, haha, these are great! I didn't know m0nst3rfy3 could make outputs like that with very short prompts! I have added them to the model's page.

too many black squares

Have you tried switching the VAE and seeing if the problem persists? Bake a different VAE and convert to diffusers again; it could be a VAE bug, which would explain why it only affects some models.
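
For reference, a minimal diffusers sketch of baking in a different VAE and re-saving; stabilityai/sd-vae-ft-mse is just a common stable choice, not necessarily the right one for this model, and the output folder name is arbitrary.

```python
# Sketch: load the model with a different VAE and re-save it in diffusers
# format, to test whether the black squares are a VAE bug. The VAE choice
# (sd-vae-ft-mse) is only a common default, not necessarily the right one.
import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse",
                                    torch_dtype=torch.float16)
pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/m0nst3rfy3", vae=vae, torch_dtype=torch.float16
)
pipe.save_pretrained("m0nst3rfy3-new-vae")  # upload this folder as the fixed model
```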

but the generated images don't look real, far from his DEMO images

I haven't tested that model, but some authors make models that rely heavily on negative prompts and post-processing to look like they do on their demo images; when you try them you may even get eyes of different colors and other things that ruin the outputs! I have rejected a lot of models with awesome demo images from civitai for that reason, but I continue to merge models and make remixes to fix those issues. I think my DreamWorksRemix was a severe example, as I found DreamWorksDiffusion completely unusable (that's why I never uploaded it here), but merging it with DreamWorks created something better than both of them. Though I can imagine the faces of users when it doesn't output any DreamWorks character and it's just the style! 🀣

Huggingface's API just gives you the raw output, the real world, and for me it always produces better results than AUTOMATIC1111 (without negative prompts or post-processing), which is great: when I make a model there, I can be sure that on Huggingface's API it'll look even better!
