City (city96)
AI & ML interests: LLMs, Diffuser models, Anime
Organizations: None yet
city96's activity
Q16
1 · #39 opened 8 days ago by noelsalazar
InvokeAI support?
2 · #7 opened 15 days ago by HuggingAny
Can Q4_0, Q4_1, Q5_0, Q5_1 be dropped?
1 · #1 opened 14 days ago by CHNtentes
SD3.5 medium, please
1 · #1 opened 21 days ago by CassyPrivate
It'd be awesome if NF4 were provided
1 · #2 opened 23 days ago by MayensGuds
How many steps are needed?
1 · #3 opened 22 days ago by pikkaa
Not able to download it
1 · #38 opened 26 days ago by iffishells
Need FP8 for speed
3 · #1 opened 27 days ago by Ai11Ali
Does it work with a GGUF of t5xxl?
5 · #2 opened 27 days ago by razvanab
Create an equivalent to GGUF for Diffusers models?
1 · #37 opened 26 days ago by julien-c
Which VAE file should be used with this GGUF?
6 · #2 opened 27 days ago by slacktahr
The dual CLIP GGUF loader can use this encoder, but what about clip_i?
2 · #9 opened about 1 month ago by witchercher
Forge support and updated convert script.
3 · #1 opened about 1 month ago by city96
llama.cpp
5 · #31 opened 3 months ago by goodasdgood
Is it using ggml to compute?
1 · #30 opened 3 months ago by CHNtentes
How to use the model?
1 · #8 opened 3 months ago by AIer0107
FLUX GGUF conversion
1 · #29 opened 3 months ago by bsingh1324
Code for using this quantized model
3 · #18 opened 3 months ago by Mahdimohseni0333
Can't see the Q5_K_M GGUF quant
2 · #28 opened 3 months ago by Ai11Ali
Method for quantizing and converting FluxDev to GGUF?
6 · #27 opened 3 months ago by Melyn