---
license: cc-by-nc-4.0
---

# Command-R 35B v1.0 - GGUF

- Model creator: [CohereForAI](https://huggingface.co/CohereForAI)
- Original model: [Command-R 35B v1.0](https://huggingface.co/CohereForAI/c4ai-command-r-v01)

## Description

This repo contains llama.cpp GGUF format model files for [Command-R 35B v1.0](https://huggingface.co/CohereForAI/c4ai-command-r-v01).

Note: until [PR #6033](https://github.com/ggerganov/llama.cpp/pull/6033) is merged upstream, you need to clone and compile the llama.cpp fork below:

```
git clone https://github.com/acanis/llama.cpp.git
cd llama.cpp
mkdir build
cd build
cmake .. -DLLAMA_CUBLAS=ON
cmake --build . --config Release -- -j16
cd ..
```

## F16 files are split and require joining

**Note:** Hugging Face does not support uploading files larger than 50 GB, so the F16 GGUF is uploaded as two split files.

To join the files, run the following:

Linux and macOS:

```
cat c4ai-command-r-v01-f16.gguf-split-* > c4ai-command-r-v01-f16.gguf
```

Then you can remove the split files to save space:

```
rm c4ai-command-r-v01-f16.gguf-split-*
```

Windows command line:

```
COPY /B c4ai-command-r-v01-f16.gguf-split-a + c4ai-command-r-v01-f16.gguf-split-b c4ai-command-r-v01-f16.gguf
```

Then you can remove the split files to save space:

```
del c4ai-command-r-v01-f16.gguf-split-a c4ai-command-r-v01-f16.gguf-split-b
```

You can optionally confirm the checksum of the merged c4ai-command-r-v01-f16.gguf against the provided md5sum file:

```
md5sum -c md5sum
```
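The join step above is plain byte-wise concatenation: `cat` appends `-split-b` directly after `-split-a`, which is why the result is bit-identical to the original file. A minimal sketch of the mechanism with throwaway dummy files (the `demo.gguf` names here are illustrative, not real model files):

```shell
# Create two dummy "split" files standing in for the real
# c4ai-command-r-v01-f16.gguf-split-a / -split-b halves.
printf 'first half ' > demo.gguf-split-a
printf 'second half' > demo.gguf-split-b

# The shell expands the glob in lexical order (-split-a, then -split-b),
# so cat writes the halves back-to-back into one file.
cat demo.gguf-split-* > demo.gguf

cat demo.gguf    # -> "first half second half"

# Clean up the demo files.
rm demo.gguf demo.gguf-split-a demo.gguf-split-b
```

The same glob-ordering property is what makes the single `cat` command in the Linux/macOS instructions safe for any number of split parts.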
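The final `md5sum -c` step reads filename/hash pairs from the checksum file and reports `OK` per file when the recomputed hash matches. A small self-contained sketch of that workflow, using a dummy file and a `md5sum.txt` stand-in for the repo's checksum file:

```shell
# Dummy payload standing in for the merged c4ai-command-r-v01-f16.gguf.
printf 'example payload' > example.gguf

# Record its checksum in the format `md5sum -c` expects (hash, two spaces, name).
md5sum example.gguf > md5sum.txt

# Verify: prints "example.gguf: OK" and exits 0 when the hash matches.
md5sum -c md5sum.txt

# Clean up the demo files.
rm example.gguf md5sum.txt
```

If the merged file were corrupted or truncated, `md5sum -c` would instead print `FAILED` and exit non-zero, which makes it easy to script the check.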