---
base_model: state-spaces/mamba-2.8b-hf
library_name: transformers
pipeline_tag: text-generation
model_creator: state-spaces
model_name: mamba-2.8b
model_type: MambaForCausalLM
inference: false
---
# mamba-2.8b-GGUF
GGUF quantizations of mamba-2.8b, produced with recent versions of llama.cpp.
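
## Usage

These files can be run locally with llama.cpp's CLI. A minimal sketch, assuming a quantized file named `mamba-2.8b.Q4_K_M.gguf` (the actual filenames in this repository may differ):

```sh
# Run inference with llama.cpp (build the project first).
# The model filename is an assumption; substitute the quant you downloaded.
./llama-cli -m mamba-2.8b.Q4_K_M.gguf \
  -p "Mamba is a state-space model that" \
  -n 128
```

`-m` selects the GGUF file, `-p` supplies the prompt, and `-n` caps the number of tokens generated.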