This is an exl2 quant of Falcon3-10B-Instruct (the measurement.json is in the main branch). Check this repository's revisions for the individual quants.
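To fetch a particular quant, download the matching revision; a minimal sketch using huggingface_hub (the revision name below is a placeholder, substitute a branch actually listed under this repository's revisions):

```python
from huggingface_hub import snapshot_download

# Download one quant revision of this repo; "4.0bpw" is a hypothetical
# branch name - replace it with one listed under this repo's revisions.
snapshot_download(
    repo_id="lucyknada/tiiuae_Falcon3-10B-Instruct-exl2",
    revision="4.0bpw",
    local_dir="Falcon3-10B-Instruct-exl2",
)
```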
Falcon3-10B-Instruct
The Falcon3 family of Open Foundation Models is a set of pretrained and instruct LLMs ranging from 1B to 10B parameters.
This repository contains Falcon3-10B-Instruct. It achieves state-of-the-art results (at the time of release) on reasoning, language understanding, instruction following, code, and mathematics tasks. Falcon3-10B-Instruct supports four languages (English, French, Spanish, Portuguese) and a context length of up to 32K.
Model Details
- Architecture
- Transformer-based causal decoder-only architecture
- 40 decoder blocks
- Grouped Query Attention (GQA) for faster inference: 12 query heads and 4 key-value heads
- Wider head dimension: 256
- High RoPE value to support long context understanding: 1000042
- Uses SwiGLU and RMSNorm
- 32K context length
- 131K vocab size
- Depth up-scaled from Falcon3-7B-Base and trained on 2 teratokens of data comprising web, code, STEM, high-quality, and multilingual data, using 1024 H100 GPU chips
- Post-trained on 1.2 million samples of STEM, conversational, code, safety, and function-call data
- Supports EN, FR, ES, PT
- Developed by Technology Innovation Institute
- License: TII Falcon-LLM License 2.0
- Model Release Date: December 2024
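The architecture figures above should be recoverable from the published model config; a minimal sanity-check sketch, assuming the Llama-style field names Falcon3 checkpoints use (field names may differ across transformers versions):

```python
from transformers import AutoConfig

# Print the values that should match the card: 40 decoder blocks,
# 12 query / 4 key-value heads, head dim 256, RoPE theta 1000042,
# 32K context, 131K vocab. Field names assume a Llama-style config.
config = AutoConfig.from_pretrained("tiiuae/Falcon3-10B-Instruct")
print(config.num_hidden_layers)        # decoder blocks
print(config.num_attention_heads)      # query heads
print(config.num_key_value_heads)      # key-value heads (GQA)
print(config.head_dim)                 # head dimension
print(config.rope_theta)               # RoPE base value
print(config.max_position_embeddings)  # context length
print(config.vocab_size)               # vocabulary size
```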
Getting started
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "tiiuae/Falcon3-10B-Instruct"

# Load the model and tokenizer; device_map="auto" places weights across
# available devices and torch_dtype="auto" uses the checkpoint's dtype.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "How many hours in one day?"
messages = [
    {"role": "system", "content": "You are a helpful friendly assistant Falcon3 from TII, try to follow instructions as much as possible."},
    {"role": "user", "content": prompt}
]

# Render the chat template, then tokenize the resulting prompt.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024
)

# Strip the prompt tokens so only the newly generated answer is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
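To watch the answer appear token by token instead of waiting for the full completion, transformers' TextStreamer can be attached to the same generate call; a small sketch reusing the model, tokenizer, and inputs from above:

```python
from transformers import TextStreamer

# Prints decoded tokens to stdout as they are generated,
# skipping the prompt and any special tokens.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(**model_inputs, max_new_tokens=1024, streamer=streamer)
```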
Benchmarks
We report our internal pipeline benchmarks in the following table.
- We use lm-evaluation-harness.
- We report raw scores obtained by applying the chat template without fewshot_as_multiturn (unlike Llama3.1); a sketch of such an invocation follows this list.
- We use the same batch size across all models.
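A hedged sketch of an evaluation run consistent with the setup above, via lm-evaluation-harness's Python entry point (task names and parameter names follow v0.4.x of the harness and may differ in other versions):

```python
import lm_eval

# Evaluate with the chat template applied but without fewshot_as_multiturn,
# matching the setup described above. Task and batch size are examples.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=tiiuae/Falcon3-10B-Instruct,dtype=auto",
    tasks=["mmlu"],
    num_fewshot=5,
    batch_size=8,
    apply_chat_template=True,
    fewshot_as_multiturn=False,
)
print(results["results"])
```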
| Category | Benchmark | Yi-1.5-9B-Chat | Mistral-Nemo-Base-2407 (12B) | Falcon3-10B-Instruct |
|---|---|---|---|---|
| General | MMLU (5-shot) | 70 | 65.9 | 71.6 |
| | MMLU-PRO (5-shot) | 39.6 | 32.7 | 44 |
| | IFEval | 57.6 | 63.4 | 78 |
| Math | GSM8K (5-shot) | 76.6 | 73.8 | 83.1 |
| | GSM8K (8-shot, COT) | 78.5 | 73.6 | 81.3 |
| | MATH Lvl-5 (4-shot) | 8.8 | 0.4 | 22.1 |
| Reasoning | ARC Challenge (25-shot) | 51.9 | 61.6 | 64.5 |
| | GPQA (0-shot) | 35.4 | 33.2 | 33.5 |
| | GPQA (0-shot, COT) | 16 | 12.7 | 32.6 |
| | MUSR (0-shot) | 41.9 | 38.1 | 41.1 |
| | BBH (3-shot) | 49.2 | 43.6 | 58.4 |
| CommonSense Understanding | PIQA (0-shot) | 76.4 | 78.2 | 78.4 |
| | SciQ (0-shot) | 61.7 | 76.4 | 90.4 |
| | Winogrande (0-shot) | - | - | 71.3 |
| | OpenbookQA (0-shot) | 43.2 | 47.4 | 48.2 |
| Instruction following | MT-Bench (avg) | 8.28 | 8.6 | 8.17 |
| | Alpaca (WC) | 25.81 | 45.44 | 24.7 |
| Tool use | BFCL AST (avg) | 48.4 | 74.2 | 86.3 |
| Code | EvalPlus (0-shot) (avg) | 69.4 | 58.9 | 74.7 |
| | MultiPL-E (0-shot) (avg) | - | 34.5 | 45.8 |
Useful links
- View our release blogpost.
- Feel free to join our Discord server if you have any questions or want to interact with our researchers and developers.
Technical Report
Coming soon.
Citation
If the Falcon3 family of models was helpful in your work, feel free to cite it:
```bibtex
@misc{Falcon3,
  title  = {The Falcon 3 family of Open Models},
  author = {TII Team},
  month  = {December},
  year   = {2024}
}
```