🦅 🐍 FalconMamba 7B
This collection features the FalconMamba 7B base model, the instruction-tuned version, their 4-bit and GGUF variants, and the demo.
- FalconMamba demo Space (running on ZeroGPU)
Falcon Mamba: The First Competitive Attention-free 7B Language Model
Paper • arXiv:2410.05355 • FalconMamba technical report
tiiuae/falcon-mamba-7b
Text Generation • The first strong attention-free model for general-purpose use, based on the Mamba-1 architecture
tiiuae/falcon-mamba-7b-instruct
Text Generation • FalconMamba-7B fine-tuned on instruction data, for chat-style interaction with the model
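A minimal sketch of querying the instruct model through the `transformers` library. This is not an official snippet from the model card: the function name and generation settings are illustrative, and loading the full-precision weights needs a GPU with roughly 16 GB of memory.

```python
# Repo id of the instruction-tuned model from this collection.
INSTRUCT_MODEL_ID = "tiiuae/falcon-mamba-7b-instruct"


def generate_reply(prompt: str, max_new_tokens: int = 128) -> str:
    """Generate a chat reply with FalconMamba-7B-instruct.

    The transformers import is kept inside the function because the
    first call downloads ~15 GB of weights; nothing heavy runs at
    import time.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer  # lazy import

    tokenizer = AutoTokenizer.from_pretrained(INSTRUCT_MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        INSTRUCT_MODEL_ID, device_map="auto"
    )

    # The instruct model ships a chat template; apply it rather than
    # feeding raw text so the prompt matches the fine-tuning format.
    messages = [{"role": "user", "content": prompt}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

The same call works for the base `tiiuae/falcon-mamba-7b` checkpoint, minus the chat-template step.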
tiiuae/falcon-mamba-7b-4bit
Text Generation • FalconMamba-7B quantized to 4-bit precision with the `bitsandbytes` library, for lower memory use and smaller GPUs
tiiuae/falcon-mamba-7b-instruct-4bit
FalconMamba-7B-instruct quantized to 4-bit precision with the `bitsandbytes` library, for lower memory use and smaller GPUs
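The prequantized repos above can be loaded with `from_pretrained` directly; alternatively, the full-precision checkpoint can be quantized on the fly with a `BitsAndBytesConfig`, as sketched below (assumes a CUDA GPU plus the `bitsandbytes` package; the NF4/bf16 settings are a common choice, not prescribed by the model card):

```python
def load_falcon_mamba_4bit():
    """Load FalconMamba-7B in 4-bit via bitsandbytes.

    Imports are inside the function because loading pulls ~15 GB of
    weights and requires a CUDA GPU.
    """
    import torch
    from transformers import (
        AutoModelForCausalLM,
        AutoTokenizer,
        BitsAndBytesConfig,
    )

    quant_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",              # NormalFloat4 weight format
        bnb_4bit_compute_dtype=torch.bfloat16,  # run matmuls in bf16
    )
    model_id = "tiiuae/falcon-mamba-7b"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        quantization_config=quant_config,
        device_map="auto",
    )
    return tokenizer, model
```

Quantizing on the fly trades a slower first load for control over the quantization settings; the prequantized `-4bit` repos skip that step.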
tiiuae/falcon-mamba-7b-instruct-BF16-GGUF
FalconMamba-7B-instruct in GGUF format (compatible with llama.cpp), BF16 precision
tiiuae/falcon-mamba-7b-instruct-F16-GGUF
FalconMamba-7B-instruct in GGUF format (compatible with llama.cpp), F16 precision
tiiuae/falcon-mamba-7b-instruct-Q8_0-GGUF
FalconMamba-7B-instruct in GGUF format (compatible with llama.cpp), Q8_0 quantization
tiiuae/falcon-mamba-7b-instruct-Q4_K_M-GGUF
FalconMamba-7B-instruct in GGUF format (compatible with llama.cpp), Q4_K_M quantization
tiiuae/falcon-mamba-7b-BF16-GGUF
FalconMamba-7B in GGUF format (compatible with llama.cpp), BF16 precision
tiiuae/falcon-mamba-7b-F16-GGUF
FalconMamba-7B in GGUF format (compatible with llama.cpp), F16 precision
tiiuae/falcon-mamba-7b-Q8_0-GGUF
FalconMamba-7B in GGUF format (compatible with llama.cpp), Q8_0 quantization
tiiuae/falcon-mamba-7b-Q4_K_M-GGUF
FalconMamba-7B in GGUF format (compatible with llama.cpp), Q4_K_M quantization
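A sketch of running one of the GGUF variants with llama.cpp's CLI. The GGUF file name is an assumption (check the repo's file listing), and the download/run commands are commented out because they fetch several gigabytes and need a llama.cpp build with FalconMamba support:

```shell
# Assumed repo and file names -- verify against the repo before running.
REPO=tiiuae/falcon-mamba-7b-instruct-Q4_K_M-GGUF
FILE=falcon-mamba-7b-instruct-Q4_K_M.gguf

# Fetch the single GGUF file, then generate with llama.cpp.
# Uncomment to run for real:
# huggingface-cli download "$REPO" "$FILE" --local-dir .
# ./llama-cli -m "$FILE" -p "Explain state-space models briefly." -n 128
```

The Q4_K_M file is the smallest of the variants listed here and the usual choice for CPU-only inference; BF16/F16 keep full quality at roughly 4x the size.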
tiiuae/falcon-mamba-7b-pre-decay
Pre-decay-stage checkpoint, useful for continual pretraining