---
base_model: tiiuae/falcon-mamba-7b
datasets:
- tiiuae/falcon-refinedweb
- HuggingFaceFW/fineweb-edu
language:
- en
license: other
license_name: falcon-mamba-7b-license
license_link: https://falconllm.tii.ae/falcon-mamba-7b-terms-and-conditions.html
tags:
- llama-cpp
- gguf-my-repo
model-index:
- name: falcon-mamba-7b
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 33.36
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/falcon-mamba-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 19.88
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/falcon-mamba-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 3.63
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/falcon-mamba-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 8.05
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/falcon-mamba-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 10.86
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/falcon-mamba-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 14.47
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/falcon-mamba-7b
      name: Open LLM Leaderboard
---

# Triangle104/falcon-mamba-7b-Q5_K_M-GGUF
This model was converted to GGUF format from [`tiiuae/falcon-mamba-7b`](https://huggingface.co/tiiuae/falcon-mamba-7b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/tiiuae/falcon-mamba-7b) for more details on the model.
+
115
+ ## Use with llama.cpp
116
+ Install llama.cpp through brew (works on Mac and Linux)
117
+
118
+ ```bash
119
+ brew install llama.cpp
120
+
121
+ ```
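
To sanity-check the install, you can print the build info of the binaries brew placed on your `PATH` (a quick check, assuming a recent llama.cpp build that supports `--version`):

```bash
# Print the llama.cpp version and build info to confirm the CLI is installed.
llama-cli --version
```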

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Triangle104/falcon-mamba-7b-Q5_K_M-GGUF --hf-file falcon-mamba-7b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
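
The CLI also accepts the usual llama.cpp sampling flags; as a sketch (assuming a recent build, where `-n` caps the number of generated tokens and `--temp` sets the sampling temperature):

```bash
# Same invocation as above, but cap output at 256 tokens and lower the
# temperature for more deterministic completions.
llama-cli --hf-repo Triangle104/falcon-mamba-7b-Q5_K_M-GGUF \
  --hf-file falcon-mamba-7b-q5_k_m.gguf \
  -p "The meaning to life and the universe is" \
  -n 256 --temp 0.7
```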

### Server:
```bash
llama-server --hf-repo Triangle104/falcon-mamba-7b-Q5_K_M-GGUF --hf-file falcon-mamba-7b-q5_k_m.gguf -c 2048
```
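
Once the server is up, it listens on `http://localhost:8080` by default and exposes a completion endpoint you can query over HTTP; a minimal sketch, assuming the default host and port:

```bash
# Ask the running llama-server instance for a 64-token completion.
curl -s http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "The meaning to life and the universe is", "n_predict": 64}'
```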

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
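
Recent llama.cpp revisions have replaced the Makefile with a CMake build, so if `make` is unavailable in your checkout, the equivalent sketch (assuming CMake is installed; `-DGGML_CUDA=ON` takes the place of `LLAMA_CUDA=1` for NVIDIA GPUs) looks like:

```bash
# Configure and build with CMake; binaries land in build/bin/ rather than
# the repository root.
cmake -B build -DLLAMA_CURL=ON
cmake --build build --config Release
```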

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Triangle104/falcon-mamba-7b-Q5_K_M-GGUF --hf-file falcon-mamba-7b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Triangle104/falcon-mamba-7b-Q5_K_M-GGUF --hf-file falcon-mamba-7b-q5_k_m.gguf -c 2048
```
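
If you built with GPU support, model layers can be offloaded to the GPU with the `-ngl` flag; a sketch, assuming a CUDA-enabled build:

```bash
# Offload up to 99 layers (effectively the whole model) to the GPU.
./llama-cli --hf-repo Triangle104/falcon-mamba-7b-Q5_K_M-GGUF \
  --hf-file falcon-mamba-7b-q5_k_m.gguf \
  -p "The meaning to life and the universe is" -ngl 99
```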