Initial GGML model commit
README.md CHANGED
@@ -1,10 +1,32 @@
---
inference: false
model_creator: StabilityAI
model_link: https://huggingface.co/stabilityai/stablecode-instruct-alpha-3b
model_name: Stablecode Instruct Alpha 3B
model_type: gpt-neox
quantized_by: TheBloke
---

<!-- header start -->
@@ -37,15 +59,13 @@ Please note that these GGMLs are **not compatible with llama.cpp, text-generatio
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/stablecode-instruct-alpha-3b-GGML)
* [StabilityAI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/stabilityai/stablecode-instruct-alpha-3b)

- ## Prompt template:

```
-
-
- ### Instruction:
{prompt}

- ###
```

<!-- compatibility_ggml start -->
@@ -110,4 +130,75 @@ Thank you to all my generous patrons and donaters!

# Original model card: StabilityAI's Stablecode Instruct Alpha 3B

-
---
inference: false
+ language:
+ - code
+ license: other
+ model-index:
+ - name: stabilityai/stablecode-instruct-alpha-3b
+   results:
+   - dataset:
+       name: HumanEval
+       type: openai_humaneval
+     metrics:
+     - name: pass@1
+       type: pass@1
+       value: 0.2689
+       verified: false
+     - name: pass@10
+       type: pass@10
+       value: 0.3618
+       verified: false
+     task:
+       type: text-generation
model_creator: StabilityAI
model_link: https://huggingface.co/stabilityai/stablecode-instruct-alpha-3b
model_name: Stablecode Instruct Alpha 3B
model_type: gpt-neox
quantized_by: TheBloke
+ tags:
+ - causal-lm
---

<!-- header start -->
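The pass@1 and pass@10 values in the front matter above are HumanEval scores. pass@k for code models is conventionally computed with the unbiased estimator of Chen et al. (2021); the sketch below shows that estimator with illustrative sample counts, not the ones actually used for this card:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples
    drawn from n generations, of which c are correct, passes."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Illustrative numbers only: with 200 samples and 54 correct,
# pass@1 reduces to the plain success rate c/n.
print(round(pass_at_k(200, 54, 1), 4))  # 0.27
```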
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/stablecode-instruct-alpha-3b-GGML)
* [StabilityAI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/stabilityai/stablecode-instruct-alpha-3b)

+ ## Prompt template: StableCode

```
+ ###Instruction:
{prompt}

+ ###Response:
```

<!-- compatibility_ggml start -->
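The template above can be filled with a small helper. This is only a sketch: the exact whitespace (the blank line before `###Response:` and the trailing newline) is an assumption read off the template's layout, not something the card states explicitly.

```python
def build_prompt(instruction: str) -> str:
    """Fill the StableCode prompt template shown above.

    The blank line before ###Response: and the trailing newline
    are assumptions inferred from the template's line layout."""
    return f"###Instruction:\n{instruction}\n\n###Response:\n"

print(build_prompt("Generate a python function to find number of CPU cores"))
```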

# Original model card: StabilityAI's Stablecode Instruct Alpha 3B

+ # `StableCode-Instruct-Alpha-3B`
+
+ ## Model Description
+
+ `StableCode-Instruct-Alpha-3B` is a 3 billion parameter decoder-only, instruction-tuned code model pre-trained on a diverse set of the programming languages that topped the Stack Overflow developer survey.
+
+ ## Usage
+ The model is intended to follow instructions to generate code. The dataset used to train the model is formatted in the Alpaca format.
+ Get started generating code with `StableCode-Instruct-Alpha-3B` by using the following code snippet:
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablecode-instruct-alpha-3b")
+ model = AutoModelForCausalLM.from_pretrained(
+     "stabilityai/stablecode-instruct-alpha-3b",
+     trust_remote_code=True,
+     torch_dtype="auto",
+ )
+ model.cuda()
+ inputs = tokenizer("###Instruction\nGenerate a python function to find number of CPU cores###Response\n", return_tensors="pt").to("cuda")
+ tokens = model.generate(
+     **inputs,
+     max_new_tokens=48,
+     temperature=0.2,
+     do_sample=True,
+ )
+ print(tokenizer.decode(tokens[0], skip_special_tokens=True))
+ ```
+
+ ## Model Details
+
+ * **Developed by**: [Stability AI](https://stability.ai/)
+ * **Model type**: `StableCode-Instruct-Alpha-3B` models are auto-regressive language models based on the transformer decoder architecture.
+ * **Language(s)**: Code
+ * **Library**: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
+ * **License**: Model checkpoints are licensed under the [StableCode Research License](https://huggingface.co/stabilityai/stablecode-instruct-alpha-3b/blob/main/LICENSE.md). Copyright (c) Stability AI Ltd. All Rights Reserved.
+ * **Contact**: For questions and comments about the model, please email `lm@stability.ai`
+
+ ### Model Architecture
+
+ | Parameters    | Hidden Size | Layers | Heads | Sequence Length |
+ |---------------|-------------|--------|-------|-----------------|
+ | 2,796,431,360 | 2560        | 32     | 32    | 4096            |
+
+ * **Decoder Layer**: Parallel Attention and MLP residuals with a single input LayerNorm ([Wang & Komatsuzaki, 2021](https://github.com/kingoflolz/mesh-transformer-jax/tree/master))
+ * **Position Embeddings**: Rotary Position Embeddings ([Su et al., 2021](https://arxiv.org/abs/2104.09864))
+ * **Bias**: LayerNorm bias terms only
+
+ ## Training
+
+ `StableCode-Instruct-Alpha-3B` is the instruction-finetuned version of [StableCode-Completion-Alpha-3B](https://huggingface.co/stabilityai/stablecode-completion-alpha-3b), trained on code instruction datasets.
+
+ ## Use and Limitations
+
+ ### Intended Use
+
+ StableCode-Instruct-Alpha-3B independently generates new code completions, but we recommend that you use it together with the tool developed by BigCode and HuggingFace ([huggingface/huggingface-vscode: Code completion VSCode extension for OSS models](https://github.com/huggingface/huggingface-vscode)) to identify and, if necessary, attribute any outputs that match training code.
+
+ ### Limitations and bias
+
+ This model is intended to be used responsibly. It is not intended to be used to create unlawful content of any kind, to further any unlawful activity, or to engage in activities with a high risk of physical or economic harm.
+
+ ## How to cite
+
+ ```bibtex
+ @misc{StableCodeInstructAlpha,
+     url={https://huggingface.co/stabilityai/stablecode-instruct-alpha-3b},
+     title={Stable Code Instruct Alpha},
+     author={Adithyan, Reshinth and Phung, Duy and Cooper, Nathan and Pinnaparaju, Nikhil and Laforte, Christian}
+ }
+ ```
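As a sanity check on the architecture table in the original card, a back-of-envelope GPT-NeoX-style parameter count lands within about 1% of the stated 2,796,431,360. The vocabulary size used below is an assumption (the card does not state it), so this is a rough estimate, not the model's actual accounting:

```python
# Rough GPT-NeoX-style parameter estimate from the table above.
# The 49,152-token vocab is an assumption (not stated in the card);
# layernorm and bias terms are ignored, so this slightly undercounts.
d_model, n_layers, vocab = 2560, 32, 49152

per_layer = 12 * d_model * d_model   # 4d^2 attention + 8d^2 MLP per layer
embeddings = 2 * vocab * d_model     # untied input + output embeddings
estimate = n_layers * per_layer + embeddings

print(f"{estimate:,}")               # ~2.77B, within ~1% of 2,796,431,360
```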