Could you create a GGUF version of this base model?
The training of this base model seems to be finished, so could you create a GGUF version of it?
You can try to create the GGUF yourself, since support for the Viking pre-tokenizer was added to llama.cpp.
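For reference, here is a rough sketch of how that conversion and quantization might look with a recent llama.cpp checkout; the paths, file names, and quant type below are placeholders rather than anything confirmed in this thread.

```python
# Sketch: convert a local HF snapshot of the Viking base model to GGUF
# and quantize it with llama.cpp. Assumes a recent llama.cpp checkout
# (with Viking pre-tokenizer support) and a built llama-quantize binary.
import subprocess

LLAMA_CPP = "llama.cpp"              # path to the llama.cpp checkout (assumed)
MODEL_DIR = "Viking-7B"              # local HF snapshot of the base model (assumed)
F16_GGUF = "viking-7b-f16.gguf"
Q4_GGUF = "viking-7b-Q4_K_M.gguf"

# 1. Convert the HF checkpoint to an f16 GGUF file.
#    (Older checkouts name the script convert-hf-to-gguf.py instead.)
subprocess.run(
    ["python", f"{LLAMA_CPP}/convert_hf_to_gguf.py", MODEL_DIR,
     "--outfile", F16_GGUF, "--outtype", "f16"],
    check=True,
)

# 2. Quantize the f16 GGUF down to a smaller format, e.g. Q4_K_M.
subprocess.run(
    [f"{LLAMA_CPP}/llama-quantize", F16_GGUF, Q4_GGUF, "Q4_K_M"],
    check=True,
)
```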
I've created the main quants for Viking 13B and Viking 7B. You can find them on my HF page.
Great, Nikolay! Will you make them available through Ollama at some point? I'm hoping they update their llama.cpp soon so we can use Viking models with it.
I tried to upload Viking 7B to Ollama's online model library, but encountered this unfixed issue: https://github.com/ollama/ollama/issues/2155
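Until that issue is resolved, importing the GGUF into a local Ollama install with a Modelfile should still work. A minimal sketch, where the GGUF file name and local model name are assumptions, not anything published in this thread:

```python
# Sketch: register a local GGUF with Ollama via a Modelfile, as a workaround
# while uploading to the online library is blocked. Assumes `ollama` is
# installed and the quantized GGUF file sits in the current directory.
import subprocess
from pathlib import Path

GGUF_PATH = "viking-7b-Q4_K_M.gguf"   # assumed local quant file
MODEL_NAME = "viking-7b"              # assumed local Ollama model name

# Minimal Modelfile: point Ollama at the local GGUF file.
Path("Modelfile").write_text(f"FROM ./{GGUF_PATH}\n")

# Register the model locally; afterwards it can be run with `ollama run viking-7b`.
subprocess.run(["ollama", "create", MODEL_NAME, "-f", "Modelfile"], check=True)
```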