---
language:
  - en
  - fr
  - es
  - pt
tags:
  - falcon3
license: other
license_name: falcon-llm-license
license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
---

# Falcon3-7B-Base

The Falcon3 family of Open Foundation Models is a set of pretrained and instruct LLMs ranging from 1B to 10B parameters.

This repository contains Falcon3-7B-Base. It achieves state-of-the-art results (at the time of release) on reasoning, language understanding, instruction following, code, and mathematics tasks. Falcon3-7B-Base supports 4 languages (English, French, Spanish, Portuguese) and a context length of up to 32K.

⚠️ This is a raw, pretrained model, which should be further fine-tuned using SFT, RLHF, continued pretraining, etc. for most use cases.

## Model Details

- Architecture (a sketch for reading these values back from the model config follows this list)
  - Transformer-based causal decoder-only architecture
  - 28 decoder blocks
  - Grouped-query attention (GQA) for faster inference: 12 query heads and 4 key-value heads
  - Wider head dimension: 256
  - High RoPE value to support long context understanding: 1000042
  - Uses SwiGLU and RMSNorm
  - 32K context length
  - 131K vocab size
- Pretrained on 14 Teratokens of data comprising web, code, STEM, high-quality and multilingual data using 2048 H100 GPU chips
- Supports EN, FR, ES, PT
- Developed by Technology Innovation Institute
- License: TII Falcon-LLM License 2.0
- Model Release Date: December 2024
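
A quick way to sanity-check the architecture values above is to read them back from the published configuration with `transformers.AutoConfig`. The attribute names below (`num_hidden_layers`, `num_key_value_heads`, `rope_theta`, etc.) assume the standard Llama-style config that `transformers` exposes for this model; adjust if the actual config uses different field names.

```python
# Sketch: read Falcon3-7B-Base's config and compare against the numbers
# listed above. Attribute names assume a Llama-style config in transformers.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("tiiuae/Falcon3-7B-Base")

print("decoder blocks: ", config.num_hidden_layers)        # expected 28
print("query heads:    ", config.num_attention_heads)      # expected 12
print("key-value heads:", config.num_key_value_heads)      # expected 4 (GQA)
print("head dimension: ", config.hidden_size // config.num_attention_heads)  # expected 256
print("RoPE theta:     ", config.rope_theta)                # expected 1000042
print("context length: ", config.max_position_embeddings)  # expected ~32K
print("vocab size:     ", config.vocab_size)                # expected ~131K
```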

## Getting started

```python
import torch
from transformers import pipeline

# Load the base model with bfloat16 weights, placing layers automatically
# across the available devices.
pipe = pipeline(
    "text-generation",
    model="tiiuae/Falcon3-7B-Base",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
response = pipe("Question: How many hours in one day? Answer: ")
print(response[0]["generated_text"])
```
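
For finer control than the `pipeline` wrapper, the same call can be made with `AutoTokenizer` and `AutoModelForCausalLM` directly. This is a minimal sketch; the `max_new_tokens` value is an illustrative choice, not a recommendation from the model card.

```python
# Sketch: lower-level generation with the transformers API. The generation
# settings here are illustrative defaults, not tuned recommendations.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/Falcon3-7B-Base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Question: How many hours in one day? Answer: "
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```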

## Benchmarks

We report our internal pipeline benchmarks in the table below; a sketch of a comparable open-source evaluation follows the table.

| Category | Benchmark | Llama3.1-8B | Qwen2.5-7B | gemma-2-9b | Falcon3-7B-Base |
|----------|-----------|-------------|------------|------------|-----------------|
| General | MMLU (5-shot) | 65.2 | 74.2 | 70.8 | 67.5 |
| General | MMLU-PRO (5-shot) | 32.7 | 43.5 | 41.4 | 39.2 |
| General | IFEval | 12.0 | 33.9 | 21.2 | 34.3 |
| Math | GSM8K (5-shot) | 49.4 | 82.9 | 69.1 | 76.2 |
| Math | MATH Lvl-5 (4-shot) | 4.1 | 15.5 | 10.5 | 18.0 |
| Reasoning | Arc Challenge (25-shot) | 58.2 | 63.2 | 67.5 | 63.1 |
| Reasoning | GPQA (0-shot) | 31.0 | 33.0 | 33.4 | 35.5 |
| Reasoning | MUSR (0-shot) | 38.0 | 44.2 | 45.3 | 47.3 |
| Reasoning | BBH (3-shot) | 46.5 | 54.0 | 54.3 | 51.0 |
| CommonSense Understanding | PIQA (0-shot) | 81.2 | 79.9 | 82.9 | 79.1 |
| CommonSense Understanding | SciQ (0-shot) | 94.6 | 95.2 | 97.1 | 92.4 |
| CommonSense Understanding | Winogrande (0-shot) | 74.0 | 72.9 | 74.2 | 71.0 |
| CommonSense Understanding | OpenbookQA (0-shot) | 44.8 | 47.0 | 47.2 | 43.8 |
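
The numbers above come from TII's internal evaluation pipeline, which is not described here. As a rough, non-equivalent way to produce comparable figures, one could run the open-source `lm-evaluation-harness`; the sketch below assumes its 0.4.x Python API (`lm_eval.simple_evaluate`), and its scores may differ from the internal pipeline.

```python
# Sketch: run 5-shot MMLU with EleutherAI's lm-evaluation-harness. This is an
# assumption about tooling, not the internal pipeline behind the table above,
# so the resulting scores may not match it exactly.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=tiiuae/Falcon3-7B-Base,dtype=bfloat16",
    tasks=["mmlu"],
    num_fewshot=5,
    batch_size="auto",
)
print(results["results"])
```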

## Technical Report

Coming soon.

## Citation

If the Falcon3 family of models was helpful to your work, feel free to cite it:

```bibtex
@misc{Falcon3,
    title = {The Falcon 3 family of Open Models},
    author = {TII Team},
    month = {December},
    year = {2024}
}
```