|
---
license: other
inference: false
language:
- en
pipeline_tag: text-generation
tags:
- transformers
- gguf
- imatrix
- stable-code-3b
- stabilityai
---
|
GGUF quantizations of https://huggingface.co/stabilityai/stable-code-3b
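
These GGUF files are meant for llama.cpp-based runtimes rather than `transformers`. A minimal sketch using `llama-cpp-python`, assuming you have downloaded one of the quantized files; the filename `stable-code-3b.Q4_K_M.gguf` below is a placeholder for whichever quantization you pick:

```python
# Minimal sketch: run a GGUF quantization with llama-cpp-python.
# "stable-code-3b.Q4_K_M.gguf" is a placeholder; use the file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="stable-code-3b.Q4_K_M.gguf",
    n_ctx=4096,       # context length to allocate
    n_gpu_layers=-1,  # offload all layers if llama.cpp was built with GPU support
)

out = llm("import torch\nimport torch.nn as nn", max_tokens=48, temperature=0.2)
print(out["choices"][0]["text"])
```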
|
|
|
# From original readme |
|
|
|
## Usage |
|
|
|
Get started generating text with `stable-code-3b` by using the following code snippet: |
|
|
|
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("stabilityai/stable-code-3b")
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stable-code-3b",
    torch_dtype="auto",
)
model.cuda()

inputs = tokenizer("import torch\nimport torch.nn as nn", return_tensors="pt").to(model.device)
tokens = model.generate(
    **inputs,
    max_new_tokens=48,
    temperature=0.2,
    do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```
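
With `do_sample=True` and a low `temperature` of 0.2, sampling stays close to greedy decoding, which tends to suit code completion; raise the temperature if you want more varied suggestions.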
|
|
|
### Run with Fill in Middle (FIM) ⚡️ |
|
|
|
<details> |
|
<summary> Click to expand </summary> |
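
Fill in Middle prompting wraps the code around the gap in the model's FIM control tokens: the text after `<fim_prefix>` is what comes before the gap, the text after `<fim_suffix>` is what comes after it, and the model generates the missing span following `<fim_middle>`.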
|
|
|
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("stabilityai/stable-code-3b")
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stable-code-3b",
    torch_dtype="auto",
)
model.cuda()

# FIM prompt built from the model's fill-in-middle control tokens
inputs = tokenizer(
    "<fim_prefix>def fib(n):<fim_suffix>    else:\n        return fib(n - 2) + fib(n - 1)<fim_middle>",
    return_tensors="pt",
).to(model.device)
tokens = model.generate(
    **inputs,
    max_new_tokens=48,
    temperature=0.2,
    do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```
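
Because `skip_special_tokens=True` strips the FIM control tokens, the printed text is the prefix, suffix, and generated middle concatenated in prompt order; to reassemble the function, splice the generated middle between the prefix and suffix yourself.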
|
|
|
</details> |
|
|
|
### Run with Flash Attention 2 ⚡️ |
|
|
|
<details> |
|
<summary> Click to expand </summary> |
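
Note that `attn_implementation="flash_attention_2"` requires the `flash-attn` package (`pip install flash-attn --no-build-isolation`) and an NVIDIA GPU supported by FlashAttention-2; without it, drop the argument to fall back to the default attention implementation.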
|
|
|
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("stabilityai/stable-code-3b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stable-code-3b",
    trust_remote_code=True,
    torch_dtype="auto",
    attn_implementation="flash_attention_2",
)
model.cuda()

inputs = tokenizer("import torch\nimport torch.nn as nn", return_tensors="pt").to(model.device)
tokens = model.generate(
    **inputs,
    max_new_tokens=48,
    temperature=0.2,
    do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```
|
|
|
</details> |