---
license: cc-by-nc-4.0
language:
- ro
base_model: OpenLLM-Ro/RoLlama3.1-8b-Instruct-2024-10-09
datasets:
- OpenLLM-Ro/ro_sft_alpaca
- OpenLLM-Ro/ro_sft_alpaca_gpt4
- OpenLLM-Ro/ro_sft_dolly
- OpenLLM-Ro/ro_sft_selfinstruct_gpt4
- OpenLLM-Ro/ro_sft_norobots
- OpenLLM-Ro/ro_sft_orca
- OpenLLM-Ro/ro_sft_camel
- OpenLLM-Ro/ro_sft_oasst
- OpenLLM-Ro/ro_sft_ultrachat
tags:
- llama-cpp
- gguf-my-repo
model-index:
- name: OpenLLM-Ro/RoLlama3.1-8b-Instruct-2024-10-09
  results:
  - task:
      type: text-generation
    dataset:
      name: RoMT-Bench
      type: RoMT-Bench
    metrics:
    - type: Score
      value: 5.42
      name: Score
    - type: Score
      value: 5.95
      name: First turn
    - type: Score
      value: 4.89
      name: Second turn
  - task:
      type: text-generation
    dataset:
      name: RoCulturaBench
      type: RoCulturaBench
    metrics:
    - type: Score
      value: 3.55
      name: Score
  - task:
      type: text-generation
    dataset:
      name: Romanian_Academic_Benchmarks
      type: Romanian_Academic_Benchmarks
    metrics:
    - type: accuracy
      value: 53.03
      name: Average accuracy
  - task:
      type: text-generation
    dataset:
      name: OpenLLM-Ro/ro_arc_challenge
      type: OpenLLM-Ro/ro_arc_challenge
    metrics:
    - type: accuracy
      value: 47.69
      name: Average accuracy
    - type: accuracy
      value: 42.76
      name: 0-shot
    - type: accuracy
      value: 46.44
      name: 1-shot
    - type: accuracy
      value: 48.24
      name: 3-shot
    - type: accuracy
      value: 48.84
      name: 5-shot
    - type: accuracy
      value: 49.36
      name: 10-shot
    - type: accuracy
      value: 50.47
      name: 25-shot
  - task:
      type: text-generation
    dataset:
      name: OpenLLM-Ro/ro_mmlu
      type: OpenLLM-Ro/ro_mmlu
    metrics:
    - type: accuracy
      value: 54.57
      name: Average accuracy
    - type: accuracy
      value: 52.95
      name: 0-shot
    - type: accuracy
      value: 54.62
      name: 1-shot
    - type: accuracy
      value: 55.54
      name: 3-shot
    - type: accuracy
      value: 55.17
      name: 5-shot
  - task:
      type: text-generation
    dataset:
      name: OpenLLM-Ro/ro_winogrande
      type: OpenLLM-Ro/ro_winogrande
    metrics:
    - type: accuracy
      value: 65.84
      name: Average accuracy
    - type: accuracy
      value: 64.4
      name: 0-shot
    - type: accuracy
      value: 66.14
      name: 1-shot
    - type: accuracy
      value: 65.75
      name: 3-shot
    - type: accuracy
      value: 67.09
      name: 5-shot
  - task:
      type: text-generation
    dataset:
      name: OpenLLM-Ro/ro_hellaswag
      type: OpenLLM-Ro/ro_hellaswag
    metrics:
    - type: accuracy
      value: 59.94
      name: Average accuracy
    - type: accuracy
      value: 59.07
      name: 0-shot
    - type: accuracy
      value: 59.26
      name: 1-shot
    - type: accuracy
      value: 60.41
      name: 3-shot
    - type: accuracy
      value: 60.18
      name: 5-shot
    - type: accuracy
      value: 60.77
      name: 10-shot
  - task:
      type: text-generation
    dataset:
      name: OpenLLM-Ro/ro_gsm8k
      type: OpenLLM-Ro/ro_gsm8k
    metrics:
    - type: accuracy
      value: 44.3
      name: Average accuracy
    - type: accuracy
      value: 35.1
      name: 1-shot
    - type: accuracy
      value: 47.01
      name: 3-shot
    - type: accuracy
      value: 50.8
      name: 5-shot
  - task:
      type: text-generation
    dataset:
      name: OpenLLM-Ro/ro_truthfulqa
      type: OpenLLM-Ro/ro_truthfulqa
    metrics:
    - type: accuracy
      value: 45.82
      name: Average accuracy
  - task:
      type: text-generation
    dataset:
      name: LaRoSeDa_binary
      type: LaRoSeDa_binary
    metrics:
    - type: macro-f1
      value: 94.56
      name: Average macro-f1
    - type: macro-f1
      value: 90.18
      name: 0-shot
    - type: macro-f1
      value: 94.45
      name: 1-shot
    - type: macro-f1
      value: 96.36
      name: 3-shot
    - type: macro-f1
      value: 97.27
      name: 5-shot
  - task:
      type: text-generation
    dataset:
      name: LaRoSeDa_multiclass
      type: LaRoSeDa_multiclass
    metrics:
    - type: macro-f1
      value: 60.1
      name: Average macro-f1
    - type: macro-f1
      value: 67.56
      name: 0-shot
    - type: macro-f1
      value: 63.21
      name: 1-shot
    - type: macro-f1
      value: 51.69
      name: 3-shot
    - type: macro-f1
      value: 57.95
      name: 5-shot
  - task:
      type: text-generation
    dataset:
      name: LaRoSeDa_binary_finetuned
      type: LaRoSeDa_binary_finetuned
    metrics:
    - type: macro-f1
      value: 95.12
      name: Average macro-f1
  - task:
      type: text-generation
    dataset:
      name: LaRoSeDa_multiclass_finetuned
      type: LaRoSeDa_multiclass_finetuned
    metrics:
    - type: macro-f1
      value: 87.53
      name: Average macro-f1
  - task:
      type: text-generation
    dataset:
      name: WMT_EN-RO
      type: WMT_EN-RO
    metrics:
    - type: bleu
      value: 21.88
      name: Average bleu
    - type: bleu
      value: 5.12
      name: 0-shot
    - type: bleu
      value: 26.99
      name: 1-shot
    - type: bleu
      value: 27.91
      name: 3-shot
    - type: bleu
      value: 27.51
      name: 5-shot
  - task:
      type: text-generation
    dataset:
      name: WMT_RO-EN
      type: WMT_RO-EN
    metrics:
    - type: bleu
      value: 23.99
      name: Average bleu
    - type: bleu
      value: 1.63
      name: 0-shot
    - type: bleu
      value: 22.59
      name: 1-shot
    - type: bleu
      value: 35.7
      name: 3-shot
    - type: bleu
      value: 36.05
      name: 5-shot
  - task:
      type: text-generation
    dataset:
      name: WMT_EN-RO_finetuned
      type: WMT_EN-RO_finetuned
    metrics:
    - type: bleu
      value: 28.27
      name: Average bleu
  - task:
      type: text-generation
    dataset:
      name: WMT_RO-EN_finetuned
      type: WMT_RO-EN_finetuned
    metrics:
    - type: bleu
      value: 40.44
      name: Average bleu
  - task:
      type: text-generation
    dataset:
      name: XQuAD
      type: XQuAD
    metrics:
    - type: exact_match
      value: 13.59
      name: Average exact_match
    - type: f1
      value: 23.56
      name: Average f1
  - task:
      type: text-generation
    dataset:
      name: XQuAD_finetuned
      type: XQuAD_finetuned
    metrics:
    - type: exact_match
      value: 49.41
      name: Average exact_match
    - type: f1
      value: 62.93
      name: Average f1
  - task:
      type: text-generation
    dataset:
      name: STS
      type: STS
    metrics:
    - type: spearman
      value: 75.89
      name: Average spearman
    - type: pearson
      value: 76.0
      name: Average pearson
  - task:
      type: text-generation
    dataset:
      name: STS_finetuned
      type: STS_finetuned
    metrics:
    - type: spearman
      value: 86.86
      name: Average spearman
    - type: pearson
      value: 87.05
      name: Average pearson
  - task:
      type: text-generation
    dataset:
      name: XQuAD_EM
      type: XQuAD_EM
    metrics:
    - type: exact_match
      value: 6.55
      name: 0-shot
    - type: exact_match
      value: 38.32
      name: 1-shot
    - type: exact_match
      value: 8.66
      name: 3-shot
    - type: exact_match
      value: 0.84
      name: 5-shot
  - task:
      type: text-generation
    dataset:
      name: XQuAD_F1
      type: XQuAD_F1
    metrics:
    - type: f1
      value: 16.04
      name: 0-shot
    - type: f1
      value: 56.16
      name: 1-shot
    - type: f1
      value: 15.64
      name: 3-shot
    - type: f1
      value: 6.39
      name: 5-shot
  - task:
      type: text-generation
    dataset:
      name: STS_Spearman
      type: STS_Spearman
    metrics:
    - type: spearman
      value: 76.27
      name: 1-shot
    - type: spearman
      value: 75.48
      name: 3-shot
    - type: spearman
      value: 75.92
      name: 5-shot
  - task:
      type: text-generation
    dataset:
      name: STS_Pearson
      type: STS_Pearson
    metrics:
    - type: pearson
      value: 76.76
      name: 1-shot
    - type: pearson
      value: 75.38
      name: 3-shot
    - type: pearson
      value: 75.87
      name: 5-shot
---

# chrisgru/RoLlama3.1-8b-Instruct-2024-10-09-Q8_0-GGUF
This model was converted to GGUF format from [`OpenLLM-Ro/RoLlama3.1-8b-Instruct-2024-10-09`](https://huggingface.co/OpenLLM-Ro/RoLlama3.1-8b-Instruct-2024-10-09) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/OpenLLM-Ro/RoLlama3.1-8b-Instruct-2024-10-09) for more details on the model.
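
If you only need the quantized file itself (for example, to use it with another GGUF-compatible runtime), one way to fetch it is with the `huggingface-cli` tool that ships with `huggingface_hub`. This is a convenience sketch, not part of the original conversion instructions:

```bash
# Download the Q8_0 GGUF file into the current directory
# (requires: pip install huggingface_hub)
huggingface-cli download chrisgru/RoLlama3.1-8b-Instruct-2024-10-09-Q8_0-GGUF \
  rollama3.1-8b-instruct-2024-10-09-q8_0.gguf --local-dir .
```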

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo chrisgru/RoLlama3.1-8b-Instruct-2024-10-09-Q8_0-GGUF --hf-file rollama3.1-8b-instruct-2024-10-09-q8_0.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo chrisgru/RoLlama3.1-8b-Instruct-2024-10-09-Q8_0-GGUF --hf-file rollama3.1-8b-instruct-2024-10-09-q8_0.gguf -c 2048
```
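
Once the server is running you can query it over HTTP. As a sketch, assuming the default host and port (`127.0.0.1:8080`) and llama.cpp's OpenAI-compatible chat endpoint:

```bash
# Send a chat request to the running llama-server (OpenAI-compatible API)
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Care este capitala României?"}
        ],
        "temperature": 0.7,
        "max_tokens": 128
      }'
```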

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo chrisgru/RoLlama3.1-8b-Instruct-2024-10-09-Q8_0-GGUF --hf-file rollama3.1-8b-instruct-2024-10-09-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo chrisgru/RoLlama3.1-8b-Instruct-2024-10-09-Q8_0-GGUF --hf-file rollama3.1-8b-instruct-2024-10-09-q8_0.gguf -c 2048
```
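
The `--hf-repo`/`--hf-file` flags download and cache the GGUF file automatically. If you already have the file on disk (for example, downloaded as shown earlier), you can point the binaries at it directly with `-m` instead; a minimal sketch:

```bash
# Run against a local GGUF file; -n limits the number of generated tokens
./llama-cli -m rollama3.1-8b-instruct-2024-10-09-q8_0.gguf \
  -p "Salut! Poți să te prezinți pe scurt?" -n 256
```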