Kusawa Kusawa
sunnykusawa
ยท
AI & ML interests
LLM, Quantization, Fine-Tuning
Organizations
None yet
sunnykusawa's activity
Unable to load quantized 4bit_m
3
#5 opened 4 months ago
by
sunnykusawa
Unable to load 4-bit quantized variant with llama.cpp
#31 opened 4 months ago
by
sunnykusawa
Input token size issue: does it really support 32k tokens?
1
#197 opened 7 months ago
by
sunnykusawa
Input validation error: `inputs` tokens + `max_new_tokens` must be <= 2048 on Mixtral 8x7B (32K tokens)
2
#199 opened 7 months ago
by
sunnykusawa
Deploy quantized model on AWS SageMaker
#4 opened 8 months ago
by
sunnykusawa