|
--- |
|
language: |
|
- en |
|
- fr |
|
- es |
|
- pt |
|
tags: |
|
- falcon3 |
|
base_model: tiiuae/Falcon3-7B-Instruct |
|
license: other |
|
license_name: falcon-llm-license |
|
license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html |
|
--- |
|
|
|
<div align="center"> |
|
<img src="https://huggingface.co/datasets/tiiuae/documentation-images/resolve/main/general/falco3-logo.png" alt="drawing" width="500"/> |
|
</div> |
|
|
|
# Falcon3-7B-Instruct-AWQ |
|
|
|
The **Falcon3** family of Open Foundation Models is a set of pretrained and instruct LLMs ranging from 1B to 10B parameters.
|
|
|
**Falcon3-7B-Instruct** achieves state-of-the-art results (at the time of release) on reasoning, language understanding, instruction following, code and mathematics tasks.
|
Falcon3-7B-Instruct supports 4 languages (English, French, Spanish, Portuguese) and a context length of up to 32K. |
|
|
|
This repository contains the 4-bit AWQ-quantized version of the instruction-tuned Falcon3-7B model.
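
Loading this checkpoint through `transformers` generally requires the `autoawq` package in addition to `torch`. As a rough illustration of what 4-bit quantization means in practice, the sketch below loads the model and prints its weight memory footprint; the "roughly a quarter of the bf16 footprint" expectation is an assumption, not a measured number from this card.

```python
from transformers import AutoModelForCausalLM

# Assumes a recent transformers release with AWQ support and the `autoawq` package installed.
model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/Falcon3-7B-Instruct-AWQ",
    torch_dtype="auto",
    device_map="auto",
)

# Approximate size of the 4-bit weights in GiB; expected to be roughly a quarter
# of the bf16 footprint of the unquantized 7B model.
print(f"{model.get_memory_footprint() / 1024**3:.1f} GiB")
```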
|
|
|
## Model Details |
|
|
|
- Architecture |
|
- Transformer-based causal decoder-only architecture |
|
- 28 decoder blocks |
|
- Grouped Query Attention (GQA) for faster inference: 12 query heads and 4 key-value heads |
|
- Wider head dimension: 256 |
|
- High RoPE value to support long context understanding: 1000042 |
|
- Uses SwiGLU and RMSNorm |
|
- 32K context length |
|
- 131K vocab size |
|
- Pretrained on 14 Teratokens of data comprising web, code, STEM, high-quality and multilingual sources, using 1024 H100 GPU chips
|
- Post-trained on 1.2 million samples of STEM, conversational, code, safety and function-calling data
|
- Supports EN, FR, ES, PT |
|
- Developed by [Technology Innovation Institute](https://www.tii.ae) |
|
- License: TII Falcon-LLM License 2.0 |
|
- Model Release Date: December 2024 |
|
- Quantization: AWQ 4-bit |
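
The hyperparameters listed above can be cross-checked against the checkpoint's configuration. A minimal sketch, assuming the config follows the usual Llama-style attribute names and exposes the AWQ settings under `quantization_config` (both assumptions, not guarantees):

```python
from transformers import AutoConfig

# Download only the config (a few KB), not the quantized weights.
config = AutoConfig.from_pretrained("tiiuae/Falcon3-7B-Instruct-AWQ")

# Expected to mirror the list above: 28 decoder blocks, 12 query / 4 key-value heads,
# head dimension 256, RoPE value 1000042, 131K vocabulary, 32K context.
print(config.num_hidden_layers, config.num_attention_heads, config.num_key_value_heads)
print(config.hidden_size // config.num_attention_heads, config.rope_theta)
print(config.vocab_size, config.max_position_embeddings)

# AWQ settings stored alongside the weights (e.g. 4-bit, group size).
print(config.quantization_config)
```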
|
|
|
|
|
## Getting started |
|
|
|
<details> |
|
<summary> Click to expand </summary> |
|
|
|
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "tiiuae/Falcon3-7B-Instruct-AWQ"

# Load the 4-bit AWQ checkpoint; device_map="auto" places it on the available device(s).
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "How many hours in one day?"

# Format the conversation with the model's chat template.
messages = [
    {"role": "system", "content": "You are a helpful friendly assistant Falcon3 from TII, try to follow instructions as much as possible."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate up to 1024 new tokens.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024
)
# Strip the prompt tokens so only the newly generated answer is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
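
If token-by-token output is preferred, for example in an interactive demo, `transformers` provides `TextStreamer`. The sketch below is an optional variation that continues from the snippet above and reuses `model`, `tokenizer` and `model_inputs`:

```python
from transformers import TextStreamer

# Stream the answer to stdout as it is generated, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

_ = model.generate(
    **model_inputs,
    max_new_tokens=1024,
    streamer=streamer,
)
```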
|
|
|
</details> |
|
|
|
<br> |
|
|
|
# Benchmarks |
|
We report below benchmark results obtained with our internal evaluation pipeline:
|
<table border="1" style="width: 100%; text-align: center; border-collapse: collapse;"> |
|
<colgroup> |
|
<col style="width: 10%;"> |
|
<col style="width: 10%;"> |
|
<col style="width: 10%;"> |
|
<col style="width: 10%;"> |
|
<col style="background-color: rgba(80, 15, 213, 0.5); width: 7%;"> |
|
</colgroup> |
|
<thead> |
|
<tr> |
|
<th>Benchmark</th> |
|
<th>Falcon 3-7B Instruct</th> |
|
<th>Falcon 3-7B Instruct-GPTQ-Int4</th> |
|
<th>Falcon 3-7B Instruct-GPTQ-Int8</th> |
|
<th>Falcon 3-7B Instruct-AWQ</th> |
|
</tr> |
|
</thead> |
|
<tbody> |
|
<tr> |
|
<td>MMLU</td> |
|
<td>67.7</td> |
|
<td>65.6</td> |
|
<td>67.6</td> |
|
<td>66.4</td> |
|
</tr> |
|
<tr> |
|
<td>MMLU-PRO</td> |
|
<td>40.9</td> |
|
<td>39.1</td> |
|
<td>40.9</td> |
|
<td>39.9</td> |
|
</tr> |
|
<tr> |
|
<td>IFEval</td> |
|
<td>75.1</td> |
|
<td>72.2</td> |
|
<td>77.0</td> |
|
<td>74.8</td> |
|
</tr> |
|
</tbody> |
|
</table> |
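
These numbers come from our internal evaluation pipeline and are not directly reproducible from this card alone. As a rough, non-equivalent starting point, a public harness such as lm-evaluation-harness can score the same checkpoint; the sketch below assumes its `lm_eval` Python API with the `hf` backend and the `mmlu` and `ifeval` task names, and its prompting and few-shot settings differ from ours, so expect different absolute scores.

```python
# Illustrative only: lm-evaluation-harness settings differ from the internal pipeline,
# so the resulting scores will not match the table above.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=tiiuae/Falcon3-7B-Instruct-AWQ",
    tasks=["mmlu", "ifeval"],
    batch_size=8,
)
print(results["results"])
```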
|
|
|
## Useful links |
|
- View our [release blogpost](https://huggingface.co/blog/falcon3). |
|
- Feel free to join [our discord server](https://discord.gg/fwXpMyGc) if you have any questions or to interact with our researchers and developers. |
|
|
|
## Technical Report |
|
Coming soon.
|
|
|
## Citation |
|
If the Falcon3 family of models was helpful to your work, feel free to cite it.
|
|
|
```
@misc{Falcon3,
    title = {The Falcon 3 Family of Open Models},
    url = {https://huggingface.co/blog/falcon3},
    author = {Falcon-LLM Team},
    month = {December},
    year = {2024}
}
```