gguf? #3
opened by wukongai
Great work! Could you release the model in GGUF format?
llama.cpp supports Qwen1.5, but not yet Qwen1.5-MoE.
There is a pull request that has not yet been merged: https://github.com/ggerganov/llama.cpp/pull/6074
Closed as the PR has been merged. Give it a try!
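If you want to try it, here is a minimal sketch using the llama-cpp-python bindings, assuming they are built against a llama.cpp version that already includes the merged Qwen1.5-MoE support; the GGUF file name and parameters below are placeholders, not an official release.

```python
# Minimal sketch: load a Qwen1.5-MoE GGUF file with llama-cpp-python.
# Assumes the bindings wrap a llama.cpp build that includes the merged
# Qwen1.5-MoE support; the model path below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen1_5-moe-a2_7b-chat-q4_0.gguf",  # placeholder GGUF file
    n_ctx=2048,        # context window size
    n_gpu_layers=-1,   # offload all layers to GPU if available; 0 for CPU only
)

out = llm(
    "Give me a short introduction to large language models.",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```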
jklj077 changed discussion status to closed