mobicham
AI & ML interests: model pruning, quantization, computer vision, LLMs
mobicham's activity
CPU support (1) · #2 opened 3 months ago by Essa20001
Decensored version? (5) · #1 opened 4 months ago by KnutJaegersberg
Oobabooga? (1) · #1 opened 4 months ago by AIGUYCONTENT
The code from the model card has errors when executed on Google Colab (1) · #1 opened 4 months ago by vasilee
Quantized GGUF version (1) · #5 opened 4 months ago by ar08
GSM8K (5-shot) performance is quite different compared to running lm_eval locally (5) · #755 opened 6 months ago by mobicham
Details about this model (1) · #4 opened 7 months ago by at676
Make it usable for CPU (1) · #3 opened 7 months ago by ar08
Error with adapter? (2) · #2 opened 8 months ago by nelkh
Any plan for making an HQQ+ 2-bit quant for Mixtral or larger models? (1) · #1 opened 8 months ago by raincandy-u
New activity in mobiuslabsgmbh/Mixtral-8x7B-Instruct-v0.1-hf-attn-4bit-moe-2bit-metaoffload-HQQ (8 months ago)
Runs out of memory on free-tier Google Colab (3) · #3 opened 8 months ago by sudhir2016
New activity in mobiuslabsgmbh/Mixtral-8x7B-Instruct-v0.1-hf-attn-4bit-moe-2bitgs8-metaoffload-HQQ (8 months ago)
Either the README is wrong, or the wrong model file is uploaded? (1) · #1 opened 8 months ago by andysalerno
Quantizations? (2) · #1 opened 10 months ago by musicurgy
Which dataset? (1) · #4 opened 9 months ago by xxxTEMPESTxxx
How do I run this on CPU? (5) · #3 opened 9 months ago by ARMcPro
New activity in mobiuslabsgmbh/Mixtral-8x7B-Instruct-v0.1-hf-attn-4bit-moe-2bit-metaoffload-HQQ (9 months ago)
GGUF format (1) · #2 opened 9 months ago by GyroO
Stop overgenerating. Need EOS token? (11) · #1 opened 9 months ago by vicplus
New activity in mobiuslabsgmbh/Mixtral-8x7B-Instruct-v0.1-hf-attn-4bit-moe-2bit-metaoffload-HQQ (9 months ago)
GSM8K score largely different from local run (6) · #591 opened 10 months ago by mobicham
Output features are different compared to timm (2) · #2 opened 10 months ago by mobicham