kaetemi committed
Commit 224a3c3
1 Parent(s): 9859614

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +155 -0
README.md ADDED
---
language:
- en
- fr
- es
- pt
license: other
library_name: transformers
tags:
- falcon3
- llama-cpp
- gguf-my-repo
license_name: falcon-llm-license
license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
base_model: tiiuae/Falcon3-10B-Base
model-index:
- name: Falcon3-10B-Base
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 36.48
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/Falcon3-10B-Base
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 41.38
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/Falcon3-10B-Base
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 24.77
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/Falcon3-10B-Base
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 12.75
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/Falcon3-10B-Base
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 14.17
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/Falcon3-10B-Base
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 36.0
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/Falcon3-10B-Base
      name: Open LLM Leaderboard
---

# kaetemi/Falcon3-10B-Base-Q8_0-GGUF
This model was converted to GGUF format from [`tiiuae/Falcon3-10B-Base`](https://huggingface.co/tiiuae/Falcon3-10B-Base) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/tiiuae/Falcon3-10B-Base) for more details on the model.
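
If you only need the quantized weights themselves, one option (assuming the `huggingface_hub` CLI is installed) is to download the GGUF file directly:

```bash
# Sketch: fetch the Q8_0 GGUF from this repo into the current directory
# (install the CLI first with: pip install -U "huggingface_hub[cli]")
huggingface-cli download kaetemi/Falcon3-10B-Base-Q8_0-GGUF falcon3-10b-base-q8_0.gguf --local-dir .
```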

## Use with llama.cpp
Install llama.cpp through Homebrew (works on Mac and Linux):

```bash
brew install llama.cpp
```
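
To confirm the install, you can check that the binaries are on your PATH (flag as in current llama.cpp builds):

```bash
# Prints the llama.cpp build/version information
llama-cli --version
```
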
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo kaetemi/Falcon3-10B-Base-Q8_0-GGUF --hf-file falcon3-10b-base-q8_0.gguf -p "The meaning to life and the universe is"
```
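
If your llama.cpp build has GPU support, the same call can offload layers to the GPU; the flags below are standard llama.cpp options, and the values are only illustrative:

```bash
# Offload all layers to the GPU and cap generation at 128 tokens (GPU-enabled build assumed)
llama-cli --hf-repo kaetemi/Falcon3-10B-Base-Q8_0-GGUF --hf-file falcon3-10b-base-q8_0.gguf \
  -p "The meaning to life and the universe is" -n 128 -ngl 99
```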

### Server:
```bash
llama-server --hf-repo kaetemi/Falcon3-10B-Base-Q8_0-GGUF --hf-file falcon3-10b-base-q8_0.gguf -c 2048
```
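
Once the server is running, you can query it over HTTP. The endpoint and default port below are llama.cpp's defaults at the time of writing; adjust if you changed `--host`/`--port`:

```bash
# Sketch: request a completion from the running llama-server
curl -s http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "The meaning to life and the universe is", "n_predict": 64}'
```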

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
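
Note that recent llama.cpp versions have replaced the Makefile build with CMake; if `make` no longer works in your checkout, a roughly equivalent CMake build (verify the option names against the llama.cpp README for your version) is:

```bash
# CMake build with model downloading over HTTP enabled; add -DGGML_CUDA=ON for NVIDIA GPUs
cmake -B build -DLLAMA_CURL=ON
cmake --build build --config Release -j
# the resulting llama-cli / llama-server binaries are placed under build/bin/
```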

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo kaetemi/Falcon3-10B-Base-Q8_0-GGUF --hf-file falcon3-10b-base-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo kaetemi/Falcon3-10B-Base-Q8_0-GGUF --hf-file falcon3-10b-base-q8_0.gguf -c 2048
```