Triangle104 committed a96631e (1 parent: 6b81210)

Update README.md

Files changed (1): README.md (+166 -0)
README.md CHANGED
@@ -112,6 +112,172 @@ model-index:
This model was converted to GGUF format from [`tiiuae/falcon-mamba-7b`](https://huggingface.co/tiiuae/falcon-mamba-7b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/tiiuae/falcon-mamba-7b) for more details on the model.

---

Model details:

## Table of Contents

- TL;DR
- Model Details
- Usage
- Training Details
- Evaluation

## TL;DR

## Model Details

### Model Description

- **Developed by:** https://www.tii.ae
- **Model type:** Causal decoder-only
- **Architecture:** Mamba
- **Language(s) (NLP):** Mainly English
- **License:** TII Falcon-Mamba License 2.0

## Usage

Find below some example scripts on how to use the model in transformers (make sure to have the latest transformers, or the one built from source):

### Using the PyTorch model

- Running the model on a CPU
- Running the model on a GPU
- Running the model on a GPU using torch.compile
- Running the model on a GPU using different precisions (FP16, 4-bit)
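
As a minimal, hedged sketch of the GPU usage described above (assuming a recent transformers release with Falcon-Mamba support; this is not the exact snippet from the original card):

```python
# Minimal sketch: load Falcon-Mamba-7B with transformers and generate on a GPU.
# Assumes a transformers version that includes Falcon-Mamba support.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-mamba-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # FP16; drop this argument to run in full precision on CPU
    device_map="auto",
)
# For 4-bit loading, pass quantization_config=BitsAndBytesConfig(load_in_4bit=True)
# from transformers (requires the bitsandbytes package).

inputs = tokenizer("Question: How many hours in one day? Answer:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```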

## Training Details

### Training Data

Falcon-Mamba has been trained with ~5,500 GT, mainly coming from RefinedWeb, a large-volume web-only dataset that has been filtered and deduplicated. Similar to the other Falcon suite models, Falcon-Mamba has been trained with a multi-stage training strategy to increase the context length from 2,048 to 8,192. Moreover, inspired by the concept of curriculum learning, we carefully selected data mixtures throughout the training stages, considering both data diversity and complexity. Note that at inference the context length is not relevant, as the Mamba architecture has no limit on long-range dependency. In the last training stage, a small portion of high-quality curated data was used to further enhance performance.

Overall, the data sources included RefinedWeb-English, high-quality technical data, code data, and math data extracted from public sources. In particular, we used samples coming from Fineweb-edu during our last training stage.

The data was tokenized with the Falcon-7B/11B tokenizer.

### Training Procedure

Falcon-Mamba-7B was trained on 256 H100 80GB GPUs for the majority of the training, using a 3D parallelism strategy (TP=1, PP=1, DP=256) combined with ZeRO.

#### Training Hyperparameters

| Hyperparameter    | Value    | Comment                                                       |
|-------------------|----------|---------------------------------------------------------------|
| Precision         | bfloat16 |                                                               |
| Optimizer         | AdamW    |                                                               |
| Max learning rate | 6.4e-4   | Following a WSD (warmup-stable-decay) learning rate schedule  |
| Weight decay      | 1e-1     |                                                               |
| Batch size        | 2048     |                                                               |

The model was trained with the AdamW optimizer and a WSD (warmup-stable-decay) learning rate schedule, with a batch-size ramp-up from $b_{\mathrm{min}} = 128$ to $b_{\mathrm{max}} = 2048$ during the first 50 GT of training. In the stable phase we used a maximal learning rate $\eta_{\mathrm{max}} = 6.4 \times 10^{-4}$, and decayed it to the minimal value $\eta_{\mathrm{min}} = \frac{\eta_{\mathrm{max}}}{256}$ with an exponential schedule over 500 GT. We also applied batch scaling during the ramp-up, rescaling the learning rate $\eta$ so that the Adam noise temperature $T_{\mathrm{noise}} \equiv \frac{\eta}{\sqrt{b}}$ is kept constant.
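
As a worked illustration of the batch-scaling rule above (a sketch derived from the stated formula, not the actual training code), keeping $T_{\mathrm{noise}} = \eta/\sqrt{b}$ constant means the learning rate grows with the square root of the batch size during the ramp-up:

```python
# Sketch of the batch-scaling rule implied by a constant Adam noise
# temperature T_noise = eta / sqrt(b). Constants are from the model card;
# the function itself is illustrative only.
import math

ETA_MAX = 6.4e-4   # maximal learning rate (stable phase)
B_MAX = 2048       # maximal batch size
B_MIN = 128        # batch size at the start of the ramp-up

def scaled_lr(batch_size: int) -> float:
    """Learning rate that keeps eta / sqrt(b) equal to ETA_MAX / sqrt(B_MAX)."""
    return ETA_MAX * math.sqrt(batch_size / B_MAX)

for b in (B_MIN, 256, 512, 1024, B_MAX):
    print(f"batch size {b:5d} -> learning rate {scaled_lr(b):.2e}")
```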

#### Speeds, Sizes, Times

The model training took roughly two months.

## Evaluation

### Benchmarks

We evaluate our model on all benchmarks of the new leaderboard's version using the lm-evaluation-harness package, and then normalize the evaluation results with HuggingFace score normalization.

| Model name                      | IFEval | BBH   | MATH Lvl 5 | GPQA | MuSR  | MMLU-PRO | Average |
|---------------------------------|--------|-------|------------|------|-------|----------|---------|
| **Pure SSM models**             |        |       |            |      |       |          |         |
| FalconMamba-7B                  | 33.36  | 19.88 | 3.63       | 8.05 | 10.86 | 14.47    | 15.04   |
| TRI-ML/mamba-7b-rw*             | 22.46  | 6.71  | 0.45       | 1.12 | 5.51  | 1.69     | 6.25    |
| **Hybrid SSM-attention models** |        |       |            |      |       |          |         |
| recurrentgemma-9b               | 30.76  | 14.80 | 4.83       | 4.70 | 6.60  | 17.88    | 13.20   |
| Zyphra/Zamba-7B-v1*             | 24.06  | 21.12 | 3.32       | 3.03 | 7.74  | 16.02    | 12.55   |
| **Transformer models**          |        |       |            |      |       |          |         |
| Falcon2-11B                     | 32.61  | 21.94 | 2.34       | 2.80 | 7.53  | 15.44    | 13.78   |
| Meta-Llama-3-8B                 | 14.55  | 24.50 | 3.25       | 7.38 | 6.24  | 24.55    | 13.41   |
| Meta-Llama-3.1-8B               | 12.70  | 25.29 | 4.61       | 6.15 | 8.98  | 24.95    | 13.78   |
| Mistral-7B-v0.1                 | 23.86  | 22.02 | 2.49       | 5.59 | 10.68 | 22.36    | 14.50   |
| Mistral-Nemo-Base-2407 (12B)    | 16.83  | 29.37 | 4.98       | 5.82 | 6.52  | 27.46    | 15.08   |
| gemma-7B                        | 26.59  | 21.12 | 6.42       | 4.92 | 10.98 | 21.64    | 15.28   |
| **RWKV models**                 |        |       |            |      |       |          |         |
| RWKV-v6-Finch-7B*               | 27.65  | 9.04  | 1.11       | 2.81 | 2.25  | 5.85     | 8.12    |
| RWKV-v6-Finch-14B*              | 29.81  | 12.89 | 1.13       | 5.01 | 3.16  | 11.3     | 10.55   |
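
For reference, a hedged sketch of how such a leaderboard-v2 run with lm-evaluation-harness might be invoked; the task-group name and flags are assumptions based on recent harness releases, not the exact commands used for the table above:

```bash
# Hedged sketch only: assumes lm-evaluation-harness is installed and that
# its Open LLM Leaderboard v2 task group is available; adjust as needed.
lm_eval --model hf \
  --model_args pretrained=tiiuae/falcon-mamba-7b,dtype=bfloat16 \
  --tasks leaderboard \
  --batch_size 8
```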

Also, we evaluate our model on the benchmarks of the first leaderboard using lighteval.

| Model name                      | ARC   | HellaSwag | MMLU  | Winogrande | TruthfulQA | GSM8K | Average |
|---------------------------------|-------|-----------|-------|------------|------------|-------|---------|
| **Pure SSM models**             |       |           |       |            |            |       |         |
| FalconMamba-7B*                 | 62.03 | 80.82     | 62.11 | 73.64      | 53.42      | 52.54 | 64.09   |
| TRI-ML/mamba-7b-rw*             | 51.25 | 80.85     | 33.41 | 71.11      | 32.08      | 4.70  | 45.52   |
| **Hybrid SSM-attention models** |       |           |       |            |            |       |         |
| recurrentgemma-9b**             | 52.00 | 80.40     | 60.50 | 73.60      | 38.60      | 42.60 | 57.95   |
| Zyphra/Zamba-7B-v1*             | 56.14 | 82.23     | 58.11 | 79.87      | 52.88      | 30.78 | 60.00   |
| **Transformer models**          |       |           |       |            |            |       |         |
| Falcon2-11B                     | 59.73 | 82.91     | 58.37 | 78.30      | 52.56      | 53.83 | 64.28   |
| Meta-Llama-3-8B                 | 60.24 | 82.23     | 66.70 | 78.45      | 42.93      | 45.19 | 62.62   |
| Meta-Llama-3.1-8B               | 58.53 | 82.13     | 66.43 | 74.35      | 44.29      | 47.92 | 62.28   |
| Mistral-7B-v0.1                 | 59.98 | 83.31     | 64.16 | 78.37      | 42.15      | 37.83 | 60.97   |
| Mistral-Nemo-Base-2407 (12B)*   | 57.94 | 82.82     | 64.43 | 73.72      | 49.14      | 55.27 | 63.89   |
| gemma-7B                        | 61.09 | 82.20     | 64.56 | 79.01      | 44.79      | 50.87 | 63.75   |
| **RWKV models**                 |       |           |       |            |            |       |         |
| RWKV-v6-Finch-7B*               | 43.86 | 75.19     | 41.69 | 68.27      | 42.19      | 19.64 | 48.47   |
| RWKV-v6-Finch-14B*              | 47.44 | 78.86     | 52.33 | 71.27      | 45.45      | 38.06 | 55.57   |

Mostly, we took the evaluation results from both leaderboards. For the models marked with a single star (*), we evaluated the tasks internally, while for the models marked with two stars (**), the results were taken from the paper or model card.

### Throughput

This model can achieve throughput and performance comparable to other transformer-based models that use optimized kernels, such as Flash Attention 2. Make sure to install the optimized Mamba kernels with the following command:

```bash
pip install "causal-conv1d>=1.4.0" mamba-ssm
```

Refer to our FalconMamba blog post for more details about performance evaluation.
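
As a rough, illustrative way to sanity-check generation throughput locally (not the benchmark methodology from the blog post; assumes a single GPU and FP16):

```python
# Illustrative throughput check: time batched generation and report tokens/second.
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-mamba-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("The Falcon-Mamba architecture", return_tensors="pt").to(model.device)

torch.cuda.synchronize()
start = time.perf_counter()
out = model.generate(**inputs, max_new_tokens=256, do_sample=False)
torch.cuda.synchronize()
elapsed = time.perf_counter() - start

generated = out.shape[1] - inputs["input_ids"].shape[1]
print(f"{generated / elapsed:.1f} tokens/s")
```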

## Technical Specifications

### Model Architecture and Objective

Falcon-Mamba-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).

The model is based on the Mamba architecture (Gu et al., 2023).

| Hyperparameter  | Value | Comment                          |
|-----------------|-------|----------------------------------|
| Layers          | 64    | Number of layers                 |
| d_model         | 4096  | Hidden dimension                 |
| d_state         | 16    | The SSM state dimension          |
| Vocabulary      | 65024 | Vocabulary size                  |
| Sequence length | 8192  | During the last training stages  |
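
These values can be cross-checked against the published config; below is a hedged sketch, where the attribute names (num_hidden_layers, hidden_size, state_size, vocab_size) are assumed to follow the usual transformers conventions for Mamba-style configs:

```python
# Hedged sketch: inspect architecture hyperparameters from the model config.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("tiiuae/falcon-mamba-7b")
print("layers:    ", config.num_hidden_layers)
print("d_model:   ", config.hidden_size)
print("d_state:   ", getattr(config, "state_size", None))  # attribute name may differ
print("vocab size:", config.vocab_size)
```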

### Compute Infrastructure

#### Hardware

Falcon-Mamba-7B was trained on AWS SageMaker, using on average 256 H100 80GB GPUs in 32 p5 instances.

#### Software

Falcon-Mamba-7B was trained on an internal distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels.

## Citation

You can use the following BibTeX citation:

```bibtex
@misc{zuo2024falconmambacompetitiveattentionfree,
      title={Falcon Mamba: The First Competitive Attention-free 7B Language Model},
      author={Jingwei Zuo and Maksim Velikanov and Dhia Eddine Rhaiem and Ilyas Chahed and Younes Belkada and Guillaume Kunsch and Hakim Hacid},
      year={2024},
      eprint={2410.05355},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2410.05355},
}
```

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric              | Value |
|---------------------|-------|
| Avg.                | 15.04 |
| IFEval (0-Shot)     | 33.36 |
| BBH (3-Shot)        | 19.88 |
| MATH Lvl 5 (4-Shot) | 3.63  |
| GPQA (0-shot)       | 8.05  |
| MuSR (0-shot)       | 10.86 |
| MMLU-PRO (5-shot)   | 14.47 |

---

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
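
A minimal sketch of the usual GGUF-my-repo workflow follows; the repo and file names are placeholders to fill in with an actual quant from this repository:

```bash
# Install llama.cpp via Homebrew (macOS and Linux).
brew install llama.cpp

# Run inference against a GGUF file hosted on the Hugging Face Hub.
# Replace the placeholders with the actual repo and quant file you want to use.
llama-cli --hf-repo <your-hf-repo> --hf-file <model-file>.gguf -p "The meaning to life and the universe is"
```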