This LLM seems to be trolling me??
3 comments · #9 opened 8 months ago by skynet24
Reducing Latency in Locally Hosted model
1 comment · #8 opened 10 months ago by anshulchandel
Not working on M1 Max using llama-cpp-python
#7 opened about 1 year ago by shroominic
Missing tokenizer.model file
3 comments · #6 opened about 1 year ago by whatever1983
Not working
5 comments · #3 opened over 1 year ago by imhsouna
Free and ready to use deepseek-coder-6.7B-instruct-GGUF model as OpenAI API compatible endpoint
#2 opened over 1 year ago by limcheekin
This model cannot be used normally
19 comments · #1 opened over 1 year ago by hyunfzen