NikolayKozloff committed on
Commit f5b243a
1 Parent(s): 2b73bb2

Upload README.md with huggingface_hub

Files changed (1): README.md (+115 −0)
---
license: apache-2.0
library_name: transformers
tags:
- code
- llama-cpp
- gguf-my-repo
base_model: ibm-granite/granite-3b-code-base
datasets:
- bigcode/commitpackft
- TIGER-Lab/MathInstruct
- meta-math/MetaMathQA
- glaiveai/glaive-code-assistant-v3
- glaive-function-calling-v2
- bugdaryan/sql-create-context-instruction
- garage-bAInd/Open-Platypus
- nvidia/HelpSteer
metrics:
- code_eval
pipeline_tag: text-generation
inference: false
model-index:
- name: granite-3b-code-instruct
  results:
  - task:
      type: text-generation
    dataset:
      name: HumanEvalSynthesis(Python)
      type: bigcode/humanevalpack
    metrics:
    - type: pass@1
      value: 51.2
      name: pass@1
    - type: pass@1
      value: 43.9
      name: pass@1
    - type: pass@1
      value: 41.5
      name: pass@1
    - type: pass@1
      value: 31.7
      name: pass@1
    - type: pass@1
      value: 40.2
      name: pass@1
    - type: pass@1
      value: 29.3
      name: pass@1
    - type: pass@1
      value: 39.6
      name: pass@1
    - type: pass@1
      value: 26.8
      name: pass@1
    - type: pass@1
      value: 39.0
      name: pass@1
    - type: pass@1
      value: 14.0
      name: pass@1
    - type: pass@1
      value: 23.8
      name: pass@1
    - type: pass@1
      value: 12.8
      name: pass@1
    - type: pass@1
      value: 26.8
      name: pass@1
    - type: pass@1
      value: 28.0
      name: pass@1
    - type: pass@1
      value: 33.5
      name: pass@1
    - type: pass@1
      value: 27.4
      name: pass@1
    - type: pass@1
      value: 31.7
      name: pass@1
    - type: pass@1
      value: 16.5
      name: pass@1
---

# NikolayKozloff/granite-3b-code-instruct-Q8_0-GGUF
This model was converted to GGUF format from [`ibm-granite/granite-3b-code-instruct`](https://huggingface.co/ibm-granite/granite-3b-code-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ibm-granite/granite-3b-code-instruct) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install ggerganov/ggerganov/llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo NikolayKozloff/granite-3b-code-instruct-Q8_0-GGUF --model granite-3b-code-instruct.Q8_0.gguf -p "The meaning to life and the universe is"
```
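The CLI invocation can also be driven from a script. A minimal Python sketch that assembles the same command for an arbitrary prompt; the `build_llama_cli_cmd` helper is hypothetical, not part of llama.cpp:

```python
import shlex

def build_llama_cli_cmd(prompt: str,
                        repo: str = "NikolayKozloff/granite-3b-code-instruct-Q8_0-GGUF",
                        model: str = "granite-3b-code-instruct.Q8_0.gguf") -> list[str]:
    """Build the llama-cli argument list shown above for a given prompt."""
    return [
        "llama-cli",
        "--hf-repo", repo,
        "--model", model,
        "-p", prompt,
    ]

cmd = build_llama_cli_cmd("def fibonacci(n):")
print(shlex.join(cmd))
```

The resulting list can be handed directly to `subprocess.run`, which avoids shell-quoting issues with prompts that contain spaces or special characters.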

Server:

```bash
llama-server --hf-repo NikolayKozloff/granite-3b-code-instruct-Q8_0-GGUF --model granite-3b-code-instruct.Q8_0.gguf -c 2048
```
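Once `llama-server` is running, it exposes an HTTP completion API, so generations can be requested from any language. A sketch in Python posting to the server's `/completion` endpoint, assuming the server's default address of `http://localhost:8080`; it falls back gracefully when no server is up:

```python
import json
import urllib.request

# Request body for llama-server's /completion endpoint.
payload = {
    "prompt": "Write a function that reverses a string.",
    "n_predict": 128,    # maximum number of tokens to generate
    "temperature": 0.2,  # low temperature suits code generation
}
body = json.dumps(payload).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:8080/completion",
    data=body,
    headers={"Content-Type": "application/json"},
)

try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(json.loads(resp.read())["content"])
except OSError:
    print("llama-server is not running; start it with the command above.")
```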

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m granite-3b-code-instruct.Q8_0.gguf -n 128
```