
gguf format?

#4 opened by stormchaser

Hi, can you please provide a GGUF version? Many others and I already have code set up for various things that consume models via llama.cpp (I love it). It would be faster for me to get started with your model if a GGUF were available. Thanks.
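For context, GGUF is the single-file container format that llama.cpp consumes. A minimal sketch of parsing its fixed-size header (magic bytes, version, tensor count, metadata key/value count), based on the public GGUF specification; the synthetic header at the end is purely illustrative:

```python
import struct

GGUF_MAGIC = b"GGUF"  # first four bytes of every GGUF file

def read_gguf_header(data: bytes):
    """Parse the fixed GGUF header: magic, uint32 version,
    uint64 tensor count, uint64 metadata KV count (little-endian)."""
    if data[:4] != GGUF_MAGIC:
        raise ValueError("not a GGUF file")
    version, n_tensors, n_kv = struct.unpack_from("<IQQ", data, 4)
    return version, n_tensors, n_kv

# Build a tiny synthetic header (version 3, no tensors, no metadata) to demonstrate:
header = GGUF_MAGIC + struct.pack("<IQQ", 3, 0, 0)
print(read_gguf_header(header))  # (3, 0, 0)
```

In practice one would not hand-parse files like this; llama.cpp and its `gguf` Python package handle reading and writing, but the header check above is a quick way to verify a download is actually a GGUF file.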

so no one here for any "discussion"?

Small Magellanic Cloud AI org

Some parts of the model are not yet supported by llama.cpp, but I guess it's resolvable; see the issues:
https://github.com/ggerganov/llama.cpp/issues/3061
https://github.com/smallcloudai/refact/issues/77

Please, feel free to contribute!

A complete set of fully functional quantisations is available at
https://huggingface.co/maddes8cht/smallcloudai-Refact-1_6B-fim-gguf

https://huggingface.co/maddes8cht/ hosts an extensive collection of .gguf-converted models, covering only truly free (OSI-compliant licence) open-source LLMs.
The compilation is nicely organised into collections, sorted by the source LLMs from which they were derived.
