Triangle104 committed: Update README.md
This model was converted to GGUF format from [`tiiuae/Falcon3-1B-Instruct`](https://huggingface.co/tiiuae/Falcon3-1B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/tiiuae/Falcon3-1B-Instruct) for more details on the model.

---

## Model details

The Falcon3 family of Open Foundation Models is a set of pretrained and instruct LLMs ranging from 1B to 10B parameters.

This repository contains Falcon3-1B-Instruct, which achieves strong results on reasoning, language understanding, instruction following, code, and mathematics tasks. Falcon3-1B-Instruct supports four languages (English, French, Spanish, Portuguese) and a context length of up to 8K tokens.

### Architecture
- Transformer-based causal decoder-only architecture
- 18 decoder blocks
- Grouped-Query Attention (GQA) for faster inference: 8 query heads and 4 key-value heads
- Wider head dimension: 256
- High RoPE value to support long-context understanding: 1000042
- Uses SwiGLU and RMSNorm
- 8K context length
- 131K vocab size
- Pruned and healed using larger Falcon models (3B and 7B respectively) on only 80 gigatokens of web, code, STEM, high-quality, and multilingual data, using 256 H100 GPUs
- Post-trained on 1.2 million samples of STEM, conversational, code, safety, and function-call data
- Supports EN, FR, ES, PT
- Developed by the Technology Innovation Institute
- License: TII Falcon-LLM License 2.0
- Model release date: December 2024
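As a rough illustration (not the official model config), the attention shapes implied by the GQA figures above — 8 query heads, 4 key-value heads, head dimension 256 — can be worked out like this:

```python
# Illustrative arithmetic only, derived from the card's stated GQA settings.
n_q_heads, n_kv_heads, head_dim = 8, 4, 256

hidden = n_q_heads * head_dim        # implied model width: 2048
q_out = n_q_heads * head_dim         # query projection output dim: 2048
kv_out = n_kv_heads * head_dim       # key/value projection output dim: 1024 each
groups = n_q_heads // n_kv_heads     # 2 query heads share each KV head
kv_cache_per_token = 2 * kv_out      # K + V values cached per layer, per token

print(hidden, q_out, kv_out, groups, kv_cache_per_token)
# → 2048 2048 1024 2 2048
```

Halving the KV heads relative to the query heads is what makes GQA cheaper at inference time: the KV cache stores 2 × 1024 values per layer per token instead of the 2 × 2048 that full multi-head attention would need.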
---

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):
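The GGUF repo and quantized file name for this conversion are not stated above, so the `<user>/<repo>` and `<model>.gguf` values below are placeholders; the general pattern for running any GGUF model with llama.cpp is:

```shell
# Install llama.cpp (the Homebrew formula works on macOS and Linux)
brew install llama.cpp

# Run the model directly from a Hugging Face repo.
# <user>/<repo> and <model>.gguf are placeholders -- substitute the
# actual GGUF repo and quant file for this conversion.
llama-cli --hf-repo <user>/<repo> --hf-file <model>.gguf -p "Why is the sky blue?"

# Or serve it over an OpenAI-compatible HTTP API:
llama-server --hf-repo <user>/<repo> --hf-file <model>.gguf -c 2048
```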