Neo Dim (NeoDim)
AI & ML interests
None yet
Recent Activity
liked a model 1 day ago: smirki/UIGEN-T1.1-Qwen-14B-GGUF
liked a model 9 days ago: agentica-org/DeepScaleR-1.5B-Preview
liked a model 9 days ago: bartowski/agentica-org_DeepScaleR-1.5B-Preview-GGUF
Organizations
None yet
NeoDim's activity
What is the prompt format? · 13 · #1 opened 11 months ago by siddhesh22
how did you convert `transformers.PreTrainedTokenizer` to ggml format? · 1 · #2 opened over 1 year ago by keunwoochoi
demo space · 2 · #4 opened over 1 year ago by matthoffner
Looks like the starchat-alpha-ggml-q4_1.bin is broken · 8 · #3 opened over 1 year ago by xhyi
missing tok_embeddings.weight error when trying to run with llama.cpp · 2 · #1 opened over 1 year ago by ultra2mh
Cannot run on llama.cpp and koboldcpp · 3 · #1 opened almost 2 years ago by FenixInDarkSolo
Which inference repo is this quantized for? · 3 · #2 opened over 1 year ago by xhyi
Can the quantized model be loaded on GPU for faster inference? · 6 · #1 opened almost 2 years ago by MohamedRashad
