Robert Sinclair
ZeroWw
AI & ML interests
LLM optimization (model quantization and back-end optimizations), so that LLMs can run on the computers of people who still have both kidneys.
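To give a feel for what model quantization means here, below is a minimal, illustrative sketch of per-block absmax 8-bit quantization, the general idea behind formats like llama.cpp's Q8_0. The function names and the flat-list representation are assumptions for illustration only, not the real GGUF layout.

```python
def quantize_q8(weights):
    """Quantize a block of floats to int8 values plus one shared scale.

    Each value is divided by scale = absmax / 127, so the largest-magnitude
    weight maps to +/-127 and everything else fits in an int8.
    """
    amax = max(abs(w) for w in weights)
    scale = amax / 127.0 if amax > 0 else 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize_q8(q, scale):
    """Recover approximate floats by multiplying back by the shared scale."""
    return [v * scale for v in q]

weights = [0.12, -0.53, 0.99, -0.01]
q, scale = quantize_q8(weights)
restored = dequantize_q8(q, scale)
```

The trade-off this illustrates: storage drops from 16 or 32 bits per weight to 8 bits plus one scale per block, at the cost of a small rounding error bounded by half the scale.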
ZeroWw's activity
Alternate quantizations. · 15 · #1 opened 3 days ago by ZeroWw
Alternative quantizations. · 6 · #1 opened about 2 hours ago by ZeroWw
Alternate quantizations · 2 · #2 opened 6 days ago by ZeroWw
Alternate quantizations · 2 · #1 opened 6 days ago by ZeroWw
Alternative quantizations. · #3 opened about 3 hours ago by ZeroWw
Please also release a LARGE model. · #14 opened about 5 hours ago by ZeroWw
Hmm, quite useless... but it works. · 6 · #4 opened 4 days ago by Zibri
Alternative quantizations. · #4 opened about 17 hours ago by ZeroWw
Not uncensored. · #3 opened about 17 hours ago by ZeroWw
Try these quantizations; they are way better. · #19 opened about 21 hours ago by ZeroWw
Alternate quantizations. · #25 opened 1 day ago by ZeroWw
Alternate quantizations. · #3 opened 1 day ago by ZeroWw
Please support this method: · 7 · #96 opened 4 days ago by ZeroWw
Alternative quantizations. · 1 · #7 opened 3 days ago by ZeroWw
Alternate quantizations. · 1 · #6 opened 2 days ago by ZeroWw
not work · 1 · #12 opened 3 days ago by sdyy
Alternative quantizations. · #13 opened 3 days ago by ZeroWw
faster · 1 · #1 opened 3 days ago by aifeifei798
Testing experimental quants · 31 · #2 opened 14 days ago by bartowski
The real problem of XTTSv2 · #12 opened 4 days ago by ZeroWw
Alternate quantizations. · 1 · #2 opened 4 days ago by ZeroWw
Alternate quantizations. · #2 opened 4 days ago by ZeroWw
Alternate quantizations. · #2 opened 4 days ago by ZeroWw
An alternate quantization. · 1 · #4 opened 5 days ago by ZeroWw
Could you please do the same for Mistral 7B Instruct? · #27 opened 6 days ago by ZeroWw
Quantization suggestion · 29 · #3 opened 18 days ago by ZeroWw
Alternate quantizations. · #4 opened 6 days ago by ZeroWw
My alternate quantizations. · 1 · #16 opened 7 days ago by ZeroWw
Create GGUF for this please · 8 · #2 opened 4 months ago by ishanparihar
Some different quantizations. · #8 opened 7 days ago by ZeroWw
Please post f16 quantization. · 8 · #1 opened about 1 month ago by ZeroWw
Check out an alternate quantization... · 7 · #7 opened 10 days ago by ZeroWw
Please check these quantizations. · 4 · #40 opened 13 days ago by ZeroWw
Try my quantizations... · #3 opened 11 days ago by ZeroWw
How did you convert it? · 5 · #1 opened 15 days ago by ZeroWw
lots of INS.... · 2 · #1 opened 14 days ago by ZeroWw
Don't download; Google scuttled this model · 10 · #77 opened 3 months ago by Tom-Neverwinter
Can't find a way to make it work with llama.cpp · #102 opened 16 days ago by ZeroWw
What do you think about this method (or derivatives)? · #15 opened 16 days ago by ZeroWw
Please do v03! · 1 · #3 opened about 1 month ago by ZeroWw
Don't forget to post here your models trained with this dataset! · #1 opened about 1 month ago by ZeroWw
Please upload f16 too. · #4 opened about 1 month ago by ZeroWw
--leave-output-tensor! · 2 · #13 opened about 2 months ago by ZeroWw
colab notebook. · #10 opened about 1 month ago by ZeroWw
New activity in NikolayKozloff/Meta-Llama-3-8B-Instruct-bf16-correct-pre-tokenizer-and-EOS-token-Q8_0-Q6_k-Q4_K_M-GGUF (about 2 months ago):
what tokenizer did you use? · #1 opened about 2 months ago by ZeroWw