Sean (Jellon)
AI & ML interests: None yet
Recent Activity
published a model 2 days ago: Jellon/Mistral-Small-24B-Instruct-2501-exl2-4bpw
updated a model 2 days ago: Jellon/Mistral-Small-24B-Instruct-2501-exl2-4bpw
published a model 2 days ago: Jellon/Mistral-Small-24B-Instruct-2501-exl2-6bpw
Organizations: None yet
Jellon's activity
32B Version? (1) · #4 opened about 1 month ago by Leorithan
Where is the base model? (3) · #4 opened 3 months ago by Jellon
safetensors (1) · #1 opened 3 months ago by Jellon
The Quant gets broken when model goes above 19000-19400 context in Chat. (1) · #1 opened 3 months ago by Panzer333
[Thanks] (1) · #1 opened 3 months ago by Darkknight535
[Thanks] (1) · #1 opened 3 months ago by Darkknight535
Good stuff! (1) · #2 opened 4 months ago by Razrien
[Urgent Feedback] (5) · #1 opened 4 months ago by Darkknight535
[FeedBack] (17) · #1 opened 4 months ago by Darkknight535
[Feedback] (4) · #1 opened 4 months ago by Darkknight535
I think this is a promising model, but it deteriorates into broken english quite fast (15) · #1 opened 4 months ago by Jellon
Error during inference (7) · #1 opened 4 months ago by Jellon
Formatting errors, sends a long stream of markdown (6) · #2 opened 5 months ago by DocStrangeLoop
Just wanted to say thanks for the exl2 · #1 opened 5 months ago by Jellon