Orimo W (imoc)
AI & ML interests
None yet
Recent Activity
liked a model 3 days ago: NousResearch/DeepHermes-3-Llama-3-8B-Preview
liked a model 4 days ago: agentica-org/DeepScaleR-1.5B-Preview
liked a model 5 days ago: deepseek-ai/Janus-Pro-7B
Organizations
None yet
imoc's activity
Hmmmmm still weird refusal, as QwQ
#5 opened about 1 month ago by imoc
Minimal GPU requirements (3 replies)
#3 opened 4 months ago by tmk12
Which one? (1 reply)
#1 opened about 2 months ago by imoc
too big to run (3 replies)
#320 opened 3 months ago by karan963
Why FP32? (2 replies)
#10 opened about 2 months ago by imoc
The training data is not in ChatML format and it won't stop correctly. (3 replies)
#3 opened 2 months ago by imoc
Difference between this and the other (100 steps) model? (7 replies)
#1 opened 6 months ago by lemon07r
Nice work! (7 replies)
#1 opened 5 months ago by DeFactOfficial
32B coding model, please (3 replies)
#4 opened 3 months ago by gopi87
vLLM replies are garbled (3 replies)
#29 opened 3 months ago by SongXiaoMao
This is way too much... USB? Yes. U SB. (3 replies)
#21 opened 3 months ago by imoc
Very good 7B, good job
#1 opened 3 months ago by imoc
Adds Chinese characters to responses (7 replies)
#16 opened 3 months ago by maxbenk
Nice name QAQ. I'll upload a 4.7bpw quantized model later if it works.
#12 opened 3 months ago by imoc
Nice work. Best 32B model (quantized to 4.7bpw) so far; more people should try it. (1 reply)
#1 opened 3 months ago by imoc
Does not work. The model size is wrong too: 1.5B x 5.5BPW should be ~1.6GB. (4 replies)
#1 opened 3 months ago by imoc
What's the prompt template, though? It doesn't like to stop 😂 (2 replies)
#4 opened 10 months ago by imoc
WC models are heavily biased towards old standards/code samples (3 replies)
#8 opened about 1 year ago by FizzyFolk