Files changed (5)
  1. README.md +80 -98
  2. config.json +1 -0
  3. special_tokens_map.json +1 -1
  4. tokenizer.json +2 -2
  5. tokenizer_config.json +4 -4
README.md CHANGED
@@ -6,35 +6,25 @@ language:
  - pt
  tags:
  - falcon3
- base_model: tiiuae/Falcon3-7B-Base
- license: other
- license_name: falcon-llm-license
- license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
- library_name: transformers
  ---

- <div align="center">
- <img src="https://huggingface.co/datasets/tiiuae/documentation-images/resolve/main/general/falco3-logo.png" alt="drawing" width="500"/>
- </div>
-
  # Falcon3-7B-Instruct

  **Falcon3** family of Open Foundation Models is a set of pretrained and instruct LLMs ranging from 1B to 10B.

- This repository contains the **Falcon3-7B-Instruct**. It achieves state of art results (at the time of release) on reasoning, language understanding, instruction following, code and mathematics tasks.
+ This repository contains the **Falcon3-7B-Instruct**. It achieves state of art results (at release's time) on reasoning, language understanding, instruction following, code and mathematics tasks.
  Falcon3-7B-Instruct supports 4 languages (english, french, spanish, portuguese) and a context length up to 32K.

  ## Model Details
  - Architecture
  - Transformer based causal decoder only architecture
  - 28 decoder blocks
- - Grouped query attention (GQA) for faster inference: 12 query heads and 4 key value heads
+ - Grouped query attention (GQA) for faster inference: 12 query heads and 4 KV heads
  - Wider head dimension: 256
  - High RoPE value to support long context understanding: 1000042
- - Uses SwiGLU and RMSNorm
- - 32K context length
- - 131K vocab size
- - Pretrained on 14 Teratokens of datasets comprising of web, code, STEM, high quality and mutlilingual data using 1024 H100 GPU chips
+ - 32k context length
+ - 131k vocab size
+ - Pretrained on 14 Gigatokens of datasets comprising of web, code, STEM, high quality and mutlilingual data using 2048 H100 GPU chips
  - Postrained on 1.2 million samples of STEM, conversations, code, safety and function call data
  - Supports EN, FR, ES, PT
  - Developed by [Technology Innovation Institute](https://www.tii.ae)
@@ -58,7 +48,7 @@ model_name = "tiiuae/Falcon3-7B-Instruct"
  model = AutoModelForCausalLM.from_pretrained(
  model_name,
  torch_dtype="auto",
- device_map="auto"]
+ device_map="auto"
  )
  tokenizer = AutoTokenizer.from_pretrained(model_name)

@@ -90,11 +80,8 @@ print(response)

  <br>

- ## Benchmarks
- We report in the following table our internal pipeline benchmarks.
- - We use [lm-evaluation harness](https://github.com/EleutherAI/lm-evaluation-harness).
- - We report **raw scores** obtained by applying chat template **without fewshot_as_multiturn** (unlike Llama3.1).
- - We use same batch-size across all models.
+ # Benchmarks
+ We report in the following table our internal pipeline benchmarks:

  <table border="1" style="width: 100%; text-align: center; border-collapse: collapse;">
  <colgroup>
@@ -102,6 +89,8 @@ We report in the following table our internal pipeline benchmarks.
  <col style="width: 10%;">
  <col style="width: 7%;">
  <col style="width: 7%;">
+ <col style="width: 7%;">
+ <col style="width: 7%;">
  <col style="background-color: rgba(80, 15, 213, 0.5); width: 7%;">
  </colgroup>
  <thead>
@@ -109,7 +98,9 @@ We report in the following table our internal pipeline benchmarks.
  <th>Category</th>
  <th>Benchmark</th>
  <th>Llama-3.1-8B-Instruct</th>
+ <th>Qwen2-7B-Instruct</th>
  <th>Qwen2.5-7B-Instruct</th>
+ <th>gemma-2-9b-it</th>
  <th>Falcon3-7B-Instruct</th>
  </tr>
  </thead>
@@ -117,125 +108,116 @@ We report in the following table our internal pipeline benchmarks.
  <tr>
  <td rowspan="3">General</td>
  <td>MMLU (5-shot)</td>
- <td>55.9</td>
- <td><b>72.4</b></td>
- <td>68</td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
  </tr>
  <tr>
  <td>MMLU-PRO (5-shot)</td>
- <td>21.8</td>
- <td>35.8</td>
- <td><b>40.7</b></td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
  </tr>
  <tr>
  <td>IFEval</td>
- <td><b>78.8</b></td>
- <td>74.7</td>
- <td>76.5</td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
  </tr>
  <tr>
- <td rowspan="3">Math</td>
+ <td rowspan="2">Math</td>
  <td>GSM8K (5-shot)</td>
- <td>78.1</td>
- <td>77.5</td>
- <td><b>79.1</b></td>
- </tr>
- <tr>
- <td>GSM8K (8-shot, COT)</td>
- <td>79.8</td>
- <td>72.7</td>
- <td><b>80.9</b></td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
  </tr>
  <tr>
- <td>MATH Lvl-5 (4-shot)</td>
- <td>10.4</td>
- <td>26</td>
- <td><b>29.4</b></td>
+ <td>MATH(4-shot)</td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
  </tr>
  <tr>
- <td rowspan="5">Reasoning</td>
+ <td rowspan="4">Reasoning</td>
  <td>Arc Challenge (25-shot)</td>
- <td>46.6</td>
- <td>55.7</td>
- <td><b>65.9</b></td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
  </tr>
  <tr>
  <td>GPQA (0-shot)</td>
- <td><b>33.6</b></td>
- <td>31.9</td>
- <td>32</td>
- </tr>
- <tr>
- <td>GPQA (0-shot, COT)</td>
- <td>9.6</td>
- <td>13.8</td>
- <td><b>22.3</b></td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
  </tr>
  <tr>
  <td>MUSR (0-shot)</td>
- <td>38.6</td>
- <td>40.7</td>
- <td><b>46.4</b></td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
  </tr>
  <tr>
  <td>BBH (3-shot)</td>
- <td>43.7</td>
- <td><b>53.9</b></td>
- <td>52.4</td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
  </tr>
  <tr>
  <td rowspan="4">CommonSense Understanding</td>
  <td>PIQA (0-shot)</td>
- <td><b>78.9</b></td>
- <td>73.7</td>
- <td>78.8</td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
  </tr>
  <tr>
  <td>SciQ (0-shot)</td>
- <td>80.2</td>
- <td>50.9</td>
- <td><b>94.7</b></td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
  </tr>
  <tr>
  <td>Winogrande (0-shot)</td>
  <td>-</td>
  <td>-</td>
- <td>70.4</td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
  </tr>
  <tr>
  <td>OpenbookQA (0-shot)</td>
- <td><b>46.2</b></td>
- <td>42.4</td>
- <td>45.8</td>
- </tr>
- <tr>
- <td rowspan="2">Instructions following</td>
- <td>MT-Bench (avg)</td>
- <td>7.9</td>
- <td><b>8.5</b></td>
- <td>8.4</td>
- </tr>
- <tr>
- <td>Alpaca (WC)</td>
- <td>26.6</td>
- <td><b>31.5</b></td>
- <td>26.1</td>
- </tr>
- <tr>
- <td>Tool use</td>
- <td>BFCL AST (avg)</td>
- <td>90.6</td>
- <td><b>91.4</b></td>
- <td>72.3</td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
+ <td>-</td>
  </tr>
  </tbody>
  </table>


- ## Technical Report
- Coming soon....
-
- ## Citation
+ # Citation
  If Falcon3 family were helpful to your work, feel free to give us a cite.

  ```
@@ -245,4 +227,4 @@ If Falcon3 family were helpful to your work, feel free to give us a cite.
  month = {December},
  year = {2024}
  }
- ```
+ ```
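The usage-snippet hunk above (`@@ -58,7 +48,7 @@`) removes a stray `]` after `device_map="auto"`; with that bracket, the README's example was not even valid Python. A minimal sketch of the check, using one-line stand-ins for the README's multi-line call (the single-line form is ours, for illustration only):

```python
# One-line stand-ins for the README's from_pretrained(...) call,
# before and after this commit's fix (the stray ']' is the bug).
old = 'model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto"])'
new = 'model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")'

def parses(src: str) -> bool:
    """Return True if src is syntactically valid Python."""
    try:
        compile(src, "<snippet>", "exec")
        return True
    except SyntaxError:
        return False

print(parses(old), parses(new))  # False True
```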
config.json CHANGED
@@ -9,6 +9,7 @@
9
  "head_dim": 256,
10
  "hidden_act": "silu",
11
  "hidden_size": 3072,
 
12
  "intermediate_size": 23040,
13
  "max_position_embeddings": 32768,
14
  "mlp_bias": false,
 
9
  "head_dim": 256,
10
  "hidden_act": "silu",
11
  "hidden_size": 3072,
12
+ "initializer_range": 0.02,
13
  "intermediate_size": 23040,
14
  "max_position_embeddings": 32768,
15
  "mlp_bias": false,
special_tokens_map.json CHANGED
@@ -32,7 +32,7 @@
32
  "single_word": false
33
  },
34
  "pad_token": {
35
- "content": "<|pad|>",
36
  "lstrip": false,
37
  "normalized": false,
38
  "rstrip": false,
 
32
  "single_word": false
33
  },
34
  "pad_token": {
35
+ "content": "<|endoftext|>",
36
  "lstrip": false,
37
  "normalized": false,
38
  "rstrip": false,
tokenizer.json CHANGED
@@ -18212,7 +18212,7 @@
18212
  },
18213
  {
18214
  "id": 2023,
18215
- "content": "<|pad|>",
18216
  "single_word": false,
18217
  "lstrip": false,
18218
  "rstrip": false,
@@ -20280,7 +20280,7 @@
20280
  ">>UNUSED_1894<<": 2020,
20281
  ">>UNUSED_1895<<": 2021,
20282
  ">>UNUSED_1896<<": 2022,
20283
- "<|pad|>": 2023,
20284
  "!": 2024,
20285
  "\"": 2025,
20286
  "#": 2026,
 
18212
  },
18213
  {
18214
  "id": 2023,
18215
+ "content": ">>UNUSED_1897<<",
18216
  "single_word": false,
18217
  "lstrip": false,
18218
  "rstrip": false,
 
20280
  ">>UNUSED_1894<<": 2020,
20281
  ">>UNUSED_1895<<": 2021,
20282
  ">>UNUSED_1896<<": 2022,
20283
+ ">>UNUSED_1897<<": 2023,
20284
  "!": 2024,
20285
  "\"": 2025,
20286
  "#": 2026,
tokenizer_config.json CHANGED
@@ -16186,7 +16186,7 @@
16186
  "special": true
16187
  },
16188
  "2023": {
16189
- "content": "<|pad|>",
16190
  "lstrip": false,
16191
  "normalized": false,
16192
  "rstrip": false,
@@ -16219,7 +16219,7 @@
16219
  ">>PASSWORD<<",
16220
  ">>KEY<<"
16221
  ],
16222
- "chat_template": "{% if tools %}{% for message in messages %}{% if message['role'] == 'system' %}{{ '<|system|>\n' + message['content'] + '\nYou are an expert in composing functions. You are given a question and a set of possible functions. \nBased on the question, you will need to make one or more function/tool calls to achieve the purpose. \nIf none of the functions can be used, point it out and refuse to answer. \nIf the given question lacks the parameters required by the function, also point it out.\n\n You have access to the following tools:\n<tools>' + tools|tojson + '</tools>\n\nThe output MUST strictly adhere to the following format, and NO other text MUST be included.\nThe example format is as follows. Please make sure the parameter type is correct. If no function call is needed, please make the tool calls an empty list [].\n<tool_call>[\n{\"name\": \"function_name1\", \"arguments\": {\"argument1\": \"value1\", \"argument2\": \"value2\"}},\n... (more tool calls as required)\n]</tool_call>' }}{% elif message['role'] == 'user' %}{{ '<|user|>\n' + message['content'] + '\n' }}{% elif message['role'] == 'assistant' %}{% if not loop.last %}{{ '<|assistant|>\n' + message['content'] + eos_token + '\n' }}{% else %}{{ '<|assistant|>\n' + message['content'] + eos_token }}{% endif %}{% endif %}{% if loop.last and add_generation_prompt %}{{ '<|assistant|>\n' }}{% endif %}{% endfor %}{% else %}{% for message in messages %}{% if message['role'] == 'system' %}{{ '<|system|>\n' + message['content'] + '\n' }}{% elif message['role'] == 'user' %}{{ '<|user|>\n' + message['content'] + '\n' }}{% elif message['role'] == 'assistant' %}{% if not loop.last %}{{ '<|assistant|>\n' + message['content'] + eos_token + '\n' }}{% else %}{{ '<|assistant|>\n' + message['content'] + eos_token }}{% endif %}{% endif %}{% if loop.last and add_generation_prompt %}{{ '<|assistant|>\n' }}{% endif %}{% endfor %}{% endif %}",
16223
  "clean_up_tokenization_spaces": true,
16224
  "eos_token": "<|endoftext|>",
16225
  "extra_special_tokens": {},
@@ -16227,7 +16227,7 @@
16227
  "input_ids",
16228
  "attention_mask"
16229
  ],
16230
- "model_max_length": 32768,
16231
- "pad_token": "<|pad|>",
16232
  "tokenizer_class": "PreTrainedTokenizerFast"
16233
  }
 
16186
  "special": true
16187
  },
16188
  "2023": {
16189
+ "content": ">>UNUSED_1897<<",
16190
  "lstrip": false,
16191
  "normalized": false,
16192
  "rstrip": false,
 
16219
  ">>PASSWORD<<",
16220
  ">>KEY<<"
16221
  ],
16222
+ "chat_template": "{% for message in messages %}{% if message['role'] == 'system' %}{{ '<|system|>\n' + message['content'] + '\n' }}{% elif message['role'] == 'user' %}{{ '<|user|>\n' + message['content'] + '\n' }}{% elif message['role'] == 'assistant' %}{% if not loop.last %}{{ '<|assistant|>\n' + message['content'] + eos_token + '\n' }}{% else %}{{ '<|assistant|>\n' + message['content'] + eos_token }}{% endif %}{% endif %}{% if loop.last and add_generation_prompt %}{{ '<|assistant|>\n' }}{% endif %}{% endfor %}",
16223
  "clean_up_tokenization_spaces": true,
16224
  "eos_token": "<|endoftext|>",
16225
  "extra_special_tokens": {},
 
16227
  "input_ids",
16228
  "attention_mask"
16229
  ],
16230
+ "model_max_length": 8192,
16231
+ "pad_token": "<|endoftext|>",
16232
  "tokenizer_class": "PreTrainedTokenizerFast"
16233
  }
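The replacement `chat_template` in the second hunk above drops the tool-calling branch and keeps only the plain `<|system|>` / `<|user|>` / `<|assistant|>` loop: each turn is the role tag, a newline, the content, and a newline; assistant turns end with the EOS token; and a trailing `<|assistant|>\n` is appended when generation is requested. A plain-Python sketch of that Jinja logic (the function name is ours; the EOS token is taken from this file's `"eos_token"` entry):

```python
# Plain-Python rendering of the simplified Jinja chat_template above.
def render_falcon3_chat(messages, eos_token="<|endoftext|>", add_generation_prompt=True):
    out = []
    for i, m in enumerate(messages):
        last = i == len(messages) - 1
        if m["role"] == "system":
            out.append("<|system|>\n" + m["content"] + "\n")
        elif m["role"] == "user":
            out.append("<|user|>\n" + m["content"] + "\n")
        elif m["role"] == "assistant":
            # Assistant turns are terminated with EOS; the final turn
            # omits the trailing newline, matching the template.
            out.append("<|assistant|>\n" + m["content"] + eos_token + ("" if last else "\n"))
        if last and add_generation_prompt:
            out.append("<|assistant|>\n")
    return "".join(out)

prompt = render_falcon3_chat([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hi!"},
])
print(prompt)
```

Note that with this commit `pad_token` now aliases the same `<|endoftext|>` token as `eos_token`, consistent with the special_tokens_map.json change above.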