---
base_model: ibm-granite/granite-3b-code-base-128k
datasets:
- codeparrot/github-code-clean
- bigcode/starcoderdata
- open-web-math/open-web-math
- math-ai/StackMathQA
library_name: transformers
license: apache-2.0
metrics:
- code_eval
pipeline_tag: text-generation
tags:
- code
- granite
- llama-cpp
- gguf-my-repo
inference: false
model-index:
- name: granite-3b-code-base-128k
  results:
  - task:
      type: text-generation
    dataset:
      name: HumanEvalSynthesis (Python)
      type: bigcode/humanevalpack
    metrics:
    - type: pass@1
      value: 36.0
      name: pass@1
      verified: false
    - type: pass@1
      value: 30.5
      name: pass@1
      verified: false
    - type: pass@1
      value: 22.4
      name: pass@1
      verified: false
    - type: pass@1
      value: 19.9
      name: pass@1
      verified: false
  - task:
      type: text-generation
    dataset:
      name: RepoQA (Python@16K)
      type: repoqa
    metrics:
    - type: pass@1 (thresh=0.5)
      value: 40.0
      name: pass@1 (thresh=0.5)
      verified: false
    - type: pass@1 (thresh=0.5)
      value: 36.0
      name: pass@1 (thresh=0.5)
      verified: false
    - type: pass@1 (thresh=0.5)
      value: 37.0
      name: pass@1 (thresh=0.5)
      verified: false
    - type: pass@1 (thresh=0.5)
      value: 27.0
      name: pass@1 (thresh=0.5)
      verified: false
    - type: pass@1 (thresh=0.5)
      value: 29.0
      name: pass@1 (thresh=0.5)
      verified: false
  - task:
      type: text-generation
    dataset:
      name: LCC (Balanced)
      type: lcc
    metrics:
    - type: Exact Match@4K
      value: 54.6
      name: Exact Match@4K
      verified: false
    - type: Exact Match@8K
      value: 56.8
      name: Exact Match@8K
      verified: false
    - type: Exact Match@16K
      value: 52.2
      name: Exact Match@16K
      verified: false
    - type: Exact Match@32K
      value: 57.8
      name: Exact Match@32K
      verified: false
  - task:
      type: text-generation
    dataset:
      name: RepoBench-P (Balanced)
      type: repobench
    metrics:
    - type: Exact Match@4K
      value: 39.8
      name: Exact Match@4K
      verified: false
    - type: Exact Match@8K
      value: 46.8
      name: Exact Match@8K
      verified: false
    - type: Exact Match@16K
      value: 43.1
      name: Exact Match@16K
      verified: false
    - type: Exact Match@32K
      value: 45.3
      name: Exact Match@32K
      verified: false
---

# ijohn07/granite-3b-code-base-128k-Q8_0-GGUF
This model was converted to GGUF format from [`ibm-granite/granite-3b-code-base-128k`](https://huggingface.co/ibm-granite/granite-3b-code-base-128k) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ibm-granite/granite-3b-code-base-128k) for more details on the model.
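
If you prefer to download the quantized file directly (for example, to use it with another GGUF-compatible runtime), one option is the `huggingface-cli` tool from the `huggingface_hub` package; this is a minimal sketch, assuming `huggingface_hub` is installed, and the filename matches this repo's Q8_0 artifact.

```bash
# Sketch: fetch the GGUF file into the current directory.
# Assumes huggingface_hub is installed: pip install -U huggingface_hub
huggingface-cli download ijohn07/granite-3b-code-base-128k-Q8_0-GGUF \
  granite-3b-code-base-128k-q8_0.gguf --local-dir .
```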

## Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux):

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo ijohn07/granite-3b-code-base-128k-Q8_0-GGUF --hf-file granite-3b-code-base-128k-q8_0.gguf -p "The meaning of life and the universe is"
```
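
`llama-cli` accepts further generation flags; as a sketch, `-c` sets the context window (the base model advertises up to 128K tokens, though a context that large needs substantial RAM) and `-n` bounds the number of generated tokens:

```bash
# Example: 16K context window, completion capped at 256 tokens.
llama-cli --hf-repo ijohn07/granite-3b-code-base-128k-Q8_0-GGUF \
  --hf-file granite-3b-code-base-128k-q8_0.gguf \
  -c 16384 -n 256 -p "def quicksort(arr):"
```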

### Server:
```bash
llama-server --hf-repo ijohn07/granite-3b-code-base-128k-Q8_0-GGUF --hf-file granite-3b-code-base-128k-q8_0.gguf -c 2048
```
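
Once the server is running, you can query it over HTTP. Assuming the default port 8080, a minimal request to its OpenAI-compatible completions endpoint looks like this:

```bash
# Minimal sketch: completion request against a local llama-server instance.
curl http://localhost:8080/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"prompt": "// binary search in C\n", "max_tokens": 128}'
```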

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
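
Note: newer llama.cpp versions have replaced the Makefile build with CMake; if `make` is unavailable in your checkout, the roughly equivalent CMake invocation (an assumption about recent versions; option names may vary) is:

```bash
# CMake-based build; -DLLAMA_CURL=ON enables the --hf-repo download support.
cmake -B build -DLLAMA_CURL=ON
cmake --build build --config Release
```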

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo ijohn07/granite-3b-code-base-128k-Q8_0-GGUF --hf-file granite-3b-code-base-128k-q8_0.gguf -p "The meaning of life and the universe is"
```
or 
```bash
./llama-server --hf-repo ijohn07/granite-3b-code-base-128k-Q8_0-GGUF --hf-file granite-3b-code-base-128k-q8_0.gguf -c 2048
```