# Falcon 40B Base Model GGUF
These files are quantized model files in GGUF format for TII's tiiuae/Falcon 40B base model.
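For reference, one way to fetch a single quantized file is with the huggingface_hub library. A minimal sketch; the `repo_id` and `filename` below are placeholders, to be replaced with the actual repository and an entry from its file list:

```python
# Minimal sketch: download one quantized GGUF file from the Hub.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="tiiuae/falcon-40b",        # placeholder repo id
    filename="falcon-40b.Q4_K_M.gguf",  # hypothetical quant filename
)
print(model_path)  # local cache path of the downloaded file
```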
## About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
The key benefit of GGUF is that it is an extensible, future-proof format which stores more information about the model as metadata. It also includes significantly improved tokenization code, including, for the first time, full support for special tokens. This should improve performance, especially with models that use new special tokens and implement custom prompt templates.
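Because GGUF keeps this metadata in the file itself, the fixed-size header can be inspected with a few lines of Python. A minimal sketch, assuming the header layout from the GGUF specification (magic bytes, version, tensor count, metadata key/value count, all little-endian); in practice the gguf Python package from the llama.cpp repo handles full metadata parsing:

```python
# Minimal sketch: read the fixed-size GGUF header.
import struct

def read_gguf_header(path: str):
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError("not a GGUF file")
        # uint32 version, uint64 tensor count, uint64 metadata KV count
        version, n_tensors, n_kv = struct.unpack("<IQQ", f.read(20))
    return version, n_tensors, n_kv

# Example (the path is a placeholder):
# print(read_gguf_header("falcon-40b.Q4_K_M.gguf"))
```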
As of August 25th, 2023, here is a list of clients and libraries that are known to support GGUF:
- llama.cpp.
- text-generation-webui, the most widely used web UI. Supports GGUF with GPU acceleration via the ctransformers backend; the llama-cpp-python backend should work soon too.
- KoboldCpp, now supports GGUF as of release 1.41! A powerful GGML web UI, with full GPU accel. Especially good for storytelling.
- LM Studio, versions 0.2.2 and later support GGUF. A fully featured local GUI with GPU acceleration on both Windows (NVIDIA and AMD) and macOS.
- LoLLMS Web UI, should now work; choose the `c_transformers` backend. A great web UI with many interesting features. Supports CUDA GPU acceleration.
- ctransformers, now supports GGUF as of version 0.2.24! A Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
- llama-cpp-python, supports GGUF as of version 0.1.79. A Python library with GPU accel, LangChain support, and OpenAI-compatible API server (see the usage sketch after this list).
- candle, added GGUF support on August 22nd. Candle is a Rust ML framework with a focus on performance, including GPU support, and ease of use.
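As a concrete example of loading these files, here is a minimal usage sketch with llama-cpp-python (version 0.1.79 or later, the first with GGUF support); the model filename, context length, and GPU layer count are placeholder assumptions to adjust for your file and hardware:

```python
# Minimal sketch: run a GGUF model with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="falcon-40b.Q4_K_M.gguf",  # hypothetical quant file
    n_ctx=2048,        # context length
    n_gpu_layers=40,   # layers to offload to GPU; 0 for CPU-only
)
out = llm("Once upon a time", max_tokens=64)
print(out["choices"][0]["text"])
```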
Other clients and libraries are expected to add GGUF support shortly.
## Repositories available