Gkunsch committed on
Commit 59d5ecd (1 parent: c4c491d)

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -28,7 +28,7 @@ We are excited to announce the release of our groundbreaking LLM model with a pu
  |---------------------|------------------------------------------------------------------|-------------------------|-------------------------------------------------------------------|
  | 🐍 **FalconMamba-7B** | [Here](https://huggingface.co/tiiuae/falcon-mamba-7b) | *pretrained model* | 7B parameters pure SSM trained on ~6,000 billion tokens. |
  | FalconMamba-7B-Instruct | [Here](https://huggingface.co/tiiuae/falcon-mamba-7b-instruct) | *instruction/chat model* | Falcon-Mamba-7B finetuned using only SFT.|
- | FalconMamba-7B-4bit | [Here](https://huggingface.co/tiiuae/falcon-mamba-7b-4bit) | *pretrained model* | 4bit quantized version using GGUF|
+ | FalconMamba-7B-4bit | [Here](https://huggingface.co/tiiuae/falcon-mamba-7b-4bit) | *pretrained model* | 4bit quantized version using GGUF.|
  | FalconMamba-7B-Instruct-4bit | [Here](https://huggingface.co/tiiuae/falcon-mamba-7b-instruct-4bit) | *instruction/chat model* | 4bit quantized version using GGUF.|