Update README.md
README.md CHANGED
@@ -11,7 +11,7 @@ datasets:
 
 <img src="https://huggingface.co/datasets/tiiuae/documentation-images/resolve/main/falcon_mamba/thumbnail.png" alt="drawing" width="800"/>
 
-**GGUF quantization of [`falcon-mamba-7b-instruct`](https://huggingface.co/tiiuae/falcon-mamba-7b-instruct)
+**GGUF quantization of [`falcon-mamba-7b-instruct`](https://huggingface.co/tiiuae/falcon-mamba-7b-instruct) in the formats `F16`, `BF16` and `Q8_0`**
 
 # Table of Contents
 
@@ -40,6 +40,14 @@ datasets:
 
 Refer to the documentation of [`llama.cpp`](https://github.com/ggerganov/llama.cpp) to understand how to run this model locally on your machine.
 
+Download the GGUF weights with the command below:
+
+```bash
+huggingface-cli download tiiuae/falcon-mamba-7b-instruct-GGUF --include FILENAME --local-dir ./
+```
+
+with `FILENAME` being the filename you want to download locally.
+
 # Training Details
 
 ## Training Data
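As a concrete end-to-end sketch of the download-and-run flow the added lines describe: the exact quant filename and the `llama-cli` binary name are assumptions here (check the repository's file listing and your `llama.cpp` build), not something the diff itself specifies.

```shell
# Hypothetical filename for one of the listed quants (F16 / BF16 / Q8_0);
# check the repository's "Files" tab for the exact name.
FILENAME="falcon-mamba-7b-instruct-Q8_0.gguf"

# Fetch only that file into the current directory.
huggingface-cli download tiiuae/falcon-mamba-7b-instruct-GGUF \
  --include "$FILENAME" --local-dir ./

# Run a short generation with llama.cpp (binary name may differ in older builds,
# where it was called `main`).
./llama-cli -m "./$FILENAME" -p "Why is the sky blue?" -n 128
```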